Test Report: KVM_Linux_crio 21767

05a109d80d7e573d35c6ebc91a1126cc576c7968:2025-10-18:41956

Failed tests (4/324)

Order  Failed test  Duration (s)
37 TestAddons/parallel/Ingress 159.99
124 TestFunctional/parallel/ImageCommands/ImageBuild 6.7
244 TestPreload 165.07
287 TestPause/serial/SecondStartNoReconfiguration 71.75
TestAddons/parallel/Ingress (159.99s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-493204 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-493204 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-493204 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [4867d583-45c3-4d54-ab34-d50cc052e2ca] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [4867d583-45c3-4d54-ab34-d50cc052e2ca] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.01567636s
I1018 08:33:28.636615    9956 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-493204 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.900292454s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-493204 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.58
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-493204 -n addons-493204
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-493204 logs -n 25: (1.556284476s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-127464                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-127464 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-609442 --alsologtostderr --binary-mirror http://127.0.0.1:45099 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-609442 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-609442                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-609442 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ addons  │ disable dashboard -p addons-493204                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-493204                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ start   │ -p addons-493204 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:32 UTC │
	│ addons  │ addons-493204 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │ 18 Oct 25 08:32 UTC │
	│ addons  │ addons-493204 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:33 UTC │
	│ addons  │ enable headlamp -p addons-493204 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:33 UTC │
	│ addons  │ addons-493204 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:33 UTC │
	│ addons  │ addons-493204 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:33 UTC │
	│ addons  │ addons-493204 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:33 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-493204                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:33 UTC │
	│ addons  │ addons-493204 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:33 UTC │
	│ addons  │ addons-493204 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:33 UTC │
	│ ip      │ addons-493204 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:33 UTC │
	│ addons  │ addons-493204 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:33 UTC │
	│ ssh     │ addons-493204 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │                     │
	│ ssh     │ addons-493204 ssh cat /opt/local-path-provisioner/pvc-133026bd-3661-4364-b6b1-3e3ca819e2f7_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:33 UTC │
	│ addons  │ addons-493204 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:34 UTC │
	│ addons  │ addons-493204 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:33 UTC │
	│ addons  │ addons-493204 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:33 UTC │
	│ addons  │ addons-493204 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:34 UTC │ 18 Oct 25 08:34 UTC │
	│ addons  │ addons-493204 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:34 UTC │ 18 Oct 25 08:34 UTC │
	│ ip      │ addons-493204 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-493204        │ jenkins │ v1.37.0 │ 18 Oct 25 08:35 UTC │ 18 Oct 25 08:35 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:29:32
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:29:32.929890   10589 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:29:32.930167   10589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:32.930184   10589 out.go:374] Setting ErrFile to fd 2...
	I1018 08:29:32.930190   10589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:32.930386   10589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
	I1018 08:29:32.930935   10589 out.go:368] Setting JSON to false
	I1018 08:29:32.931758   10589 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":723,"bootTime":1760775450,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:29:32.931832   10589 start.go:141] virtualization: kvm guest
	I1018 08:29:32.933810   10589 out.go:179] * [addons-493204] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 08:29:32.935259   10589 notify.go:220] Checking for updates...
	I1018 08:29:32.935268   10589 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 08:29:32.936714   10589 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:29:32.938409   10589 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	I1018 08:29:32.939720   10589 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	I1018 08:29:32.943646   10589 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 08:29:32.945308   10589 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:29:32.947097   10589 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:29:32.979209   10589 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 08:29:32.980607   10589 start.go:305] selected driver: kvm2
	I1018 08:29:32.980627   10589 start.go:925] validating driver "kvm2" against <nil>
	I1018 08:29:32.980638   10589 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:29:32.981408   10589 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:29:32.981486   10589 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21767-6063/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 08:29:32.996066   10589 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 08:29:32.996101   10589 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21767-6063/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 08:29:33.010592   10589 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 08:29:33.010638   10589 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:29:33.010899   10589 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:29:33.010947   10589 cni.go:84] Creating CNI manager for ""
	I1018 08:29:33.011005   10589 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 08:29:33.011013   10589 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 08:29:33.011062   10589 start.go:349] cluster config:
	{Name:addons-493204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-493204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1018 08:29:33.011157   10589 iso.go:125] acquiring lock: {Name:mk5e486e8f05c541fb7f7e8ec869cafc091f385a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:29:33.013125   10589 out.go:179] * Starting "addons-493204" primary control-plane node in "addons-493204" cluster
	I1018 08:29:33.014440   10589 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:29:33.014488   10589 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 08:29:33.014496   10589 cache.go:58] Caching tarball of preloaded images
	I1018 08:29:33.014591   10589 preload.go:233] Found /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 08:29:33.014603   10589 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 08:29:33.014942   10589 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/config.json ...
	I1018 08:29:33.014996   10589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/config.json: {Name:mkb49463162f732620dae19024c67eaf5d3e2e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:33.015144   10589 start.go:360] acquireMachinesLock for addons-493204: {Name:mk264c321ec76ef9ad1eaece53fae2e5807c459a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 08:29:33.015192   10589 start.go:364] duration metric: took 35.274µs to acquireMachinesLock for "addons-493204"
	I1018 08:29:33.015210   10589 start.go:93] Provisioning new machine with config: &{Name:addons-493204 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-493204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:29:33.015270   10589 start.go:125] createHost starting for "" (driver="kvm2")
	I1018 08:29:33.017009   10589 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1018 08:29:33.017146   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:29:33.017184   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:29:33.031099   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I1018 08:29:33.031627   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:29:33.032193   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:29:33.032221   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:29:33.032560   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:29:33.032748   10589 main.go:141] libmachine: (addons-493204) Calling .GetMachineName
	I1018 08:29:33.032879   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:29:33.033018   10589 start.go:159] libmachine.API.Create for "addons-493204" (driver="kvm2")
	I1018 08:29:33.033058   10589 client.go:168] LocalClient.Create starting
	I1018 08:29:33.033113   10589 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem
	I1018 08:29:33.176286   10589 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem
	I1018 08:29:33.295349   10589 main.go:141] libmachine: Running pre-create checks...
	I1018 08:29:33.295375   10589 main.go:141] libmachine: (addons-493204) Calling .PreCreateCheck
	I1018 08:29:33.295941   10589 main.go:141] libmachine: (addons-493204) Calling .GetConfigRaw
	I1018 08:29:33.296500   10589 main.go:141] libmachine: Creating machine...
	I1018 08:29:33.296523   10589 main.go:141] libmachine: (addons-493204) Calling .Create
	I1018 08:29:33.296691   10589 main.go:141] libmachine: (addons-493204) creating domain...
	I1018 08:29:33.296707   10589 main.go:141] libmachine: (addons-493204) creating network...
	I1018 08:29:33.298259   10589 main.go:141] libmachine: (addons-493204) DBG | found existing default network
	I1018 08:29:33.298358   10589 main.go:141] libmachine: (addons-493204) DBG | <network>
	I1018 08:29:33.298380   10589 main.go:141] libmachine: (addons-493204) DBG |   <name>default</name>
	I1018 08:29:33.298395   10589 main.go:141] libmachine: (addons-493204) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1018 08:29:33.298408   10589 main.go:141] libmachine: (addons-493204) DBG |   <forward mode='nat'>
	I1018 08:29:33.298420   10589 main.go:141] libmachine: (addons-493204) DBG |     <nat>
	I1018 08:29:33.298429   10589 main.go:141] libmachine: (addons-493204) DBG |       <port start='1024' end='65535'/>
	I1018 08:29:33.298441   10589 main.go:141] libmachine: (addons-493204) DBG |     </nat>
	I1018 08:29:33.298452   10589 main.go:141] libmachine: (addons-493204) DBG |   </forward>
	I1018 08:29:33.298466   10589 main.go:141] libmachine: (addons-493204) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1018 08:29:33.298477   10589 main.go:141] libmachine: (addons-493204) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1018 08:29:33.298489   10589 main.go:141] libmachine: (addons-493204) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1018 08:29:33.298498   10589 main.go:141] libmachine: (addons-493204) DBG |     <dhcp>
	I1018 08:29:33.298511   10589 main.go:141] libmachine: (addons-493204) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1018 08:29:33.298528   10589 main.go:141] libmachine: (addons-493204) DBG |     </dhcp>
	I1018 08:29:33.298538   10589 main.go:141] libmachine: (addons-493204) DBG |   </ip>
	I1018 08:29:33.298550   10589 main.go:141] libmachine: (addons-493204) DBG | </network>
	I1018 08:29:33.298559   10589 main.go:141] libmachine: (addons-493204) DBG | 
	I1018 08:29:33.299049   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:33.298882   10617 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123550}
	I1018 08:29:33.299119   10589 main.go:141] libmachine: (addons-493204) DBG | defining private network:
	I1018 08:29:33.299145   10589 main.go:141] libmachine: (addons-493204) DBG | 
	I1018 08:29:33.299160   10589 main.go:141] libmachine: (addons-493204) DBG | <network>
	I1018 08:29:33.299178   10589 main.go:141] libmachine: (addons-493204) DBG |   <name>mk-addons-493204</name>
	I1018 08:29:33.299209   10589 main.go:141] libmachine: (addons-493204) DBG |   <dns enable='no'/>
	I1018 08:29:33.299236   10589 main.go:141] libmachine: (addons-493204) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 08:29:33.299264   10589 main.go:141] libmachine: (addons-493204) DBG |     <dhcp>
	I1018 08:29:33.299286   10589 main.go:141] libmachine: (addons-493204) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 08:29:33.299299   10589 main.go:141] libmachine: (addons-493204) DBG |     </dhcp>
	I1018 08:29:33.299308   10589 main.go:141] libmachine: (addons-493204) DBG |   </ip>
	I1018 08:29:33.299316   10589 main.go:141] libmachine: (addons-493204) DBG | </network>
	I1018 08:29:33.299323   10589 main.go:141] libmachine: (addons-493204) DBG | 
	I1018 08:29:33.305755   10589 main.go:141] libmachine: (addons-493204) DBG | creating private network mk-addons-493204 192.168.39.0/24...
	I1018 08:29:33.376181   10589 main.go:141] libmachine: (addons-493204) DBG | private network mk-addons-493204 192.168.39.0/24 created
	I1018 08:29:33.376497   10589 main.go:141] libmachine: (addons-493204) DBG | <network>
	I1018 08:29:33.376515   10589 main.go:141] libmachine: (addons-493204) DBG |   <name>mk-addons-493204</name>
	I1018 08:29:33.376526   10589 main.go:141] libmachine: (addons-493204) setting up store path in /home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204 ...
	I1018 08:29:33.376535   10589 main.go:141] libmachine: (addons-493204) DBG |   <uuid>6648ecbe-fdb6-4deb-aab7-9ec5736dad30</uuid>
	I1018 08:29:33.376544   10589 main.go:141] libmachine: (addons-493204) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1018 08:29:33.376551   10589 main.go:141] libmachine: (addons-493204) DBG |   <mac address='52:54:00:12:c6:5c'/>
	I1018 08:29:33.376558   10589 main.go:141] libmachine: (addons-493204) DBG |   <dns enable='no'/>
	I1018 08:29:33.376566   10589 main.go:141] libmachine: (addons-493204) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 08:29:33.376579   10589 main.go:141] libmachine: (addons-493204) DBG |     <dhcp>
	I1018 08:29:33.376587   10589 main.go:141] libmachine: (addons-493204) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 08:29:33.376600   10589 main.go:141] libmachine: (addons-493204) building disk image from file:///home/jenkins/minikube-integration/21767-6063/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 08:29:33.376608   10589 main.go:141] libmachine: (addons-493204) DBG |     </dhcp>
	I1018 08:29:33.376618   10589 main.go:141] libmachine: (addons-493204) DBG |   </ip>
	I1018 08:29:33.376625   10589 main.go:141] libmachine: (addons-493204) DBG | </network>
	I1018 08:29:33.376632   10589 main.go:141] libmachine: (addons-493204) DBG | 
	I1018 08:29:33.376648   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:33.376490   10617 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21767-6063/.minikube
	I1018 08:29:33.376680   10589 main.go:141] libmachine: (addons-493204) Downloading /home/jenkins/minikube-integration/21767-6063/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21767-6063/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1018 08:29:33.636699   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:33.636551   10617 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa...
	I1018 08:29:34.217641   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:34.217496   10617 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/addons-493204.rawdisk...
	I1018 08:29:34.217680   10589 main.go:141] libmachine: (addons-493204) DBG | Writing magic tar header
	I1018 08:29:34.217706   10589 main.go:141] libmachine: (addons-493204) DBG | Writing SSH key tar header
	I1018 08:29:34.217721   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:34.217614   10617 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204 ...
	I1018 08:29:34.217734   10589 main.go:141] libmachine: (addons-493204) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204
	I1018 08:29:34.217753   10589 main.go:141] libmachine: (addons-493204) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21767-6063/.minikube/machines
	I1018 08:29:34.217783   10589 main.go:141] libmachine: (addons-493204) setting executable bit set on /home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204 (perms=drwx------)
	I1018 08:29:34.217809   10589 main.go:141] libmachine: (addons-493204) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21767-6063/.minikube
	I1018 08:29:34.217817   10589 main.go:141] libmachine: (addons-493204) setting executable bit set on /home/jenkins/minikube-integration/21767-6063/.minikube/machines (perms=drwxr-xr-x)
	I1018 08:29:34.217832   10589 main.go:141] libmachine: (addons-493204) setting executable bit set on /home/jenkins/minikube-integration/21767-6063/.minikube (perms=drwxr-xr-x)
	I1018 08:29:34.217842   10589 main.go:141] libmachine: (addons-493204) setting executable bit set on /home/jenkins/minikube-integration/21767-6063 (perms=drwxrwxr-x)
	I1018 08:29:34.217848   10589 main.go:141] libmachine: (addons-493204) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21767-6063
	I1018 08:29:34.217855   10589 main.go:141] libmachine: (addons-493204) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1018 08:29:34.217861   10589 main.go:141] libmachine: (addons-493204) DBG | checking permissions on dir: /home/jenkins
	I1018 08:29:34.217867   10589 main.go:141] libmachine: (addons-493204) DBG | checking permissions on dir: /home
	I1018 08:29:34.217872   10589 main.go:141] libmachine: (addons-493204) DBG | skipping /home - not owner
	I1018 08:29:34.217888   10589 main.go:141] libmachine: (addons-493204) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1018 08:29:34.217900   10589 main.go:141] libmachine: (addons-493204) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1018 08:29:34.217915   10589 main.go:141] libmachine: (addons-493204) defining domain...
	I1018 08:29:34.218933   10589 main.go:141] libmachine: (addons-493204) defining domain using XML: 
	I1018 08:29:34.218953   10589 main.go:141] libmachine: (addons-493204) <domain type='kvm'>
	I1018 08:29:34.218966   10589 main.go:141] libmachine: (addons-493204)   <name>addons-493204</name>
	I1018 08:29:34.218977   10589 main.go:141] libmachine: (addons-493204)   <memory unit='MiB'>4096</memory>
	I1018 08:29:34.218989   10589 main.go:141] libmachine: (addons-493204)   <vcpu>2</vcpu>
	I1018 08:29:34.218998   10589 main.go:141] libmachine: (addons-493204)   <features>
	I1018 08:29:34.219006   10589 main.go:141] libmachine: (addons-493204)     <acpi/>
	I1018 08:29:34.219011   10589 main.go:141] libmachine: (addons-493204)     <apic/>
	I1018 08:29:34.219016   10589 main.go:141] libmachine: (addons-493204)     <pae/>
	I1018 08:29:34.219020   10589 main.go:141] libmachine: (addons-493204)   </features>
	I1018 08:29:34.219025   10589 main.go:141] libmachine: (addons-493204)   <cpu mode='host-passthrough'>
	I1018 08:29:34.219029   10589 main.go:141] libmachine: (addons-493204)   </cpu>
	I1018 08:29:34.219033   10589 main.go:141] libmachine: (addons-493204)   <os>
	I1018 08:29:34.219044   10589 main.go:141] libmachine: (addons-493204)     <type>hvm</type>
	I1018 08:29:34.219052   10589 main.go:141] libmachine: (addons-493204)     <boot dev='cdrom'/>
	I1018 08:29:34.219065   10589 main.go:141] libmachine: (addons-493204)     <boot dev='hd'/>
	I1018 08:29:34.219073   10589 main.go:141] libmachine: (addons-493204)     <bootmenu enable='no'/>
	I1018 08:29:34.219076   10589 main.go:141] libmachine: (addons-493204)   </os>
	I1018 08:29:34.219082   10589 main.go:141] libmachine: (addons-493204)   <devices>
	I1018 08:29:34.219087   10589 main.go:141] libmachine: (addons-493204)     <disk type='file' device='cdrom'>
	I1018 08:29:34.219095   10589 main.go:141] libmachine: (addons-493204)       <source file='/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/boot2docker.iso'/>
	I1018 08:29:34.219100   10589 main.go:141] libmachine: (addons-493204)       <target dev='hdc' bus='scsi'/>
	I1018 08:29:34.219107   10589 main.go:141] libmachine: (addons-493204)       <readonly/>
	I1018 08:29:34.219123   10589 main.go:141] libmachine: (addons-493204)     </disk>
	I1018 08:29:34.219131   10589 main.go:141] libmachine: (addons-493204)     <disk type='file' device='disk'>
	I1018 08:29:34.219137   10589 main.go:141] libmachine: (addons-493204)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1018 08:29:34.219144   10589 main.go:141] libmachine: (addons-493204)       <source file='/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/addons-493204.rawdisk'/>
	I1018 08:29:34.219149   10589 main.go:141] libmachine: (addons-493204)       <target dev='hda' bus='virtio'/>
	I1018 08:29:34.219153   10589 main.go:141] libmachine: (addons-493204)     </disk>
	I1018 08:29:34.219157   10589 main.go:141] libmachine: (addons-493204)     <interface type='network'>
	I1018 08:29:34.219163   10589 main.go:141] libmachine: (addons-493204)       <source network='mk-addons-493204'/>
	I1018 08:29:34.219169   10589 main.go:141] libmachine: (addons-493204)       <model type='virtio'/>
	I1018 08:29:34.219173   10589 main.go:141] libmachine: (addons-493204)     </interface>
	I1018 08:29:34.219177   10589 main.go:141] libmachine: (addons-493204)     <interface type='network'>
	I1018 08:29:34.219182   10589 main.go:141] libmachine: (addons-493204)       <source network='default'/>
	I1018 08:29:34.219191   10589 main.go:141] libmachine: (addons-493204)       <model type='virtio'/>
	I1018 08:29:34.219197   10589 main.go:141] libmachine: (addons-493204)     </interface>
	I1018 08:29:34.219203   10589 main.go:141] libmachine: (addons-493204)     <serial type='pty'>
	I1018 08:29:34.219208   10589 main.go:141] libmachine: (addons-493204)       <target port='0'/>
	I1018 08:29:34.219214   10589 main.go:141] libmachine: (addons-493204)     </serial>
	I1018 08:29:34.219219   10589 main.go:141] libmachine: (addons-493204)     <console type='pty'>
	I1018 08:29:34.219223   10589 main.go:141] libmachine: (addons-493204)       <target type='serial' port='0'/>
	I1018 08:29:34.219230   10589 main.go:141] libmachine: (addons-493204)     </console>
	I1018 08:29:34.219234   10589 main.go:141] libmachine: (addons-493204)     <rng model='virtio'>
	I1018 08:29:34.219242   10589 main.go:141] libmachine: (addons-493204)       <backend model='random'>/dev/random</backend>
	I1018 08:29:34.219246   10589 main.go:141] libmachine: (addons-493204)     </rng>
	I1018 08:29:34.219251   10589 main.go:141] libmachine: (addons-493204)   </devices>
	I1018 08:29:34.219260   10589 main.go:141] libmachine: (addons-493204) </domain>
	I1018 08:29:34.219269   10589 main.go:141] libmachine: (addons-493204) 
	I1018 08:29:34.227488   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:e6:4d:49 in network default
	I1018 08:29:34.228071   10589 main.go:141] libmachine: (addons-493204) starting domain...
	I1018 08:29:34.228090   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:34.228097   10589 main.go:141] libmachine: (addons-493204) ensuring networks are active...
	I1018 08:29:34.228827   10589 main.go:141] libmachine: (addons-493204) Ensuring network default is active
	I1018 08:29:34.229181   10589 main.go:141] libmachine: (addons-493204) Ensuring network mk-addons-493204 is active
	I1018 08:29:34.230425   10589 main.go:141] libmachine: (addons-493204) getting domain XML...
	I1018 08:29:34.231368   10589 main.go:141] libmachine: (addons-493204) DBG | starting domain XML:
	I1018 08:29:34.231390   10589 main.go:141] libmachine: (addons-493204) DBG | <domain type='kvm'>
	I1018 08:29:34.231400   10589 main.go:141] libmachine: (addons-493204) DBG |   <name>addons-493204</name>
	I1018 08:29:34.231408   10589 main.go:141] libmachine: (addons-493204) DBG |   <uuid>83a766df-a3e9-41ac-b9b8-4ce5492a5b47</uuid>
	I1018 08:29:34.231432   10589 main.go:141] libmachine: (addons-493204) DBG |   <memory unit='KiB'>4194304</memory>
	I1018 08:29:34.231451   10589 main.go:141] libmachine: (addons-493204) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1018 08:29:34.231478   10589 main.go:141] libmachine: (addons-493204) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 08:29:34.231497   10589 main.go:141] libmachine: (addons-493204) DBG |   <os>
	I1018 08:29:34.231505   10589 main.go:141] libmachine: (addons-493204) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 08:29:34.231509   10589 main.go:141] libmachine: (addons-493204) DBG |     <boot dev='cdrom'/>
	I1018 08:29:34.231515   10589 main.go:141] libmachine: (addons-493204) DBG |     <boot dev='hd'/>
	I1018 08:29:34.231519   10589 main.go:141] libmachine: (addons-493204) DBG |     <bootmenu enable='no'/>
	I1018 08:29:34.231524   10589 main.go:141] libmachine: (addons-493204) DBG |   </os>
	I1018 08:29:34.231534   10589 main.go:141] libmachine: (addons-493204) DBG |   <features>
	I1018 08:29:34.231542   10589 main.go:141] libmachine: (addons-493204) DBG |     <acpi/>
	I1018 08:29:34.231552   10589 main.go:141] libmachine: (addons-493204) DBG |     <apic/>
	I1018 08:29:34.231561   10589 main.go:141] libmachine: (addons-493204) DBG |     <pae/>
	I1018 08:29:34.231570   10589 main.go:141] libmachine: (addons-493204) DBG |   </features>
	I1018 08:29:34.231577   10589 main.go:141] libmachine: (addons-493204) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 08:29:34.231600   10589 main.go:141] libmachine: (addons-493204) DBG |   <clock offset='utc'/>
	I1018 08:29:34.231608   10589 main.go:141] libmachine: (addons-493204) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 08:29:34.231613   10589 main.go:141] libmachine: (addons-493204) DBG |   <on_reboot>restart</on_reboot>
	I1018 08:29:34.231624   10589 main.go:141] libmachine: (addons-493204) DBG |   <on_crash>destroy</on_crash>
	I1018 08:29:34.231634   10589 main.go:141] libmachine: (addons-493204) DBG |   <devices>
	I1018 08:29:34.231647   10589 main.go:141] libmachine: (addons-493204) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 08:29:34.231657   10589 main.go:141] libmachine: (addons-493204) DBG |     <disk type='file' device='cdrom'>
	I1018 08:29:34.231675   10589 main.go:141] libmachine: (addons-493204) DBG |       <driver name='qemu' type='raw'/>
	I1018 08:29:34.231690   10589 main.go:141] libmachine: (addons-493204) DBG |       <source file='/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/boot2docker.iso'/>
	I1018 08:29:34.231700   10589 main.go:141] libmachine: (addons-493204) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 08:29:34.231708   10589 main.go:141] libmachine: (addons-493204) DBG |       <readonly/>
	I1018 08:29:34.231736   10589 main.go:141] libmachine: (addons-493204) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 08:29:34.231747   10589 main.go:141] libmachine: (addons-493204) DBG |     </disk>
	I1018 08:29:34.231764   10589 main.go:141] libmachine: (addons-493204) DBG |     <disk type='file' device='disk'>
	I1018 08:29:34.231779   10589 main.go:141] libmachine: (addons-493204) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 08:29:34.231793   10589 main.go:141] libmachine: (addons-493204) DBG |       <source file='/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/addons-493204.rawdisk'/>
	I1018 08:29:34.231808   10589 main.go:141] libmachine: (addons-493204) DBG |       <target dev='hda' bus='virtio'/>
	I1018 08:29:34.231822   10589 main.go:141] libmachine: (addons-493204) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 08:29:34.231840   10589 main.go:141] libmachine: (addons-493204) DBG |     </disk>
	I1018 08:29:34.231850   10589 main.go:141] libmachine: (addons-493204) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 08:29:34.231858   10589 main.go:141] libmachine: (addons-493204) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 08:29:34.231863   10589 main.go:141] libmachine: (addons-493204) DBG |     </controller>
	I1018 08:29:34.231871   10589 main.go:141] libmachine: (addons-493204) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 08:29:34.231877   10589 main.go:141] libmachine: (addons-493204) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 08:29:34.231888   10589 main.go:141] libmachine: (addons-493204) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 08:29:34.231903   10589 main.go:141] libmachine: (addons-493204) DBG |     </controller>
	I1018 08:29:34.231932   10589 main.go:141] libmachine: (addons-493204) DBG |     <interface type='network'>
	I1018 08:29:34.231946   10589 main.go:141] libmachine: (addons-493204) DBG |       <mac address='52:54:00:48:27:75'/>
	I1018 08:29:34.231956   10589 main.go:141] libmachine: (addons-493204) DBG |       <source network='mk-addons-493204'/>
	I1018 08:29:34.231964   10589 main.go:141] libmachine: (addons-493204) DBG |       <model type='virtio'/>
	I1018 08:29:34.231980   10589 main.go:141] libmachine: (addons-493204) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 08:29:34.231987   10589 main.go:141] libmachine: (addons-493204) DBG |     </interface>
	I1018 08:29:34.231995   10589 main.go:141] libmachine: (addons-493204) DBG |     <interface type='network'>
	I1018 08:29:34.231999   10589 main.go:141] libmachine: (addons-493204) DBG |       <mac address='52:54:00:e6:4d:49'/>
	I1018 08:29:34.232006   10589 main.go:141] libmachine: (addons-493204) DBG |       <source network='default'/>
	I1018 08:29:34.232018   10589 main.go:141] libmachine: (addons-493204) DBG |       <model type='virtio'/>
	I1018 08:29:34.232028   10589 main.go:141] libmachine: (addons-493204) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 08:29:34.232037   10589 main.go:141] libmachine: (addons-493204) DBG |     </interface>
	I1018 08:29:34.232046   10589 main.go:141] libmachine: (addons-493204) DBG |     <serial type='pty'>
	I1018 08:29:34.232058   10589 main.go:141] libmachine: (addons-493204) DBG |       <target type='isa-serial' port='0'>
	I1018 08:29:34.232067   10589 main.go:141] libmachine: (addons-493204) DBG |         <model name='isa-serial'/>
	I1018 08:29:34.232076   10589 main.go:141] libmachine: (addons-493204) DBG |       </target>
	I1018 08:29:34.232086   10589 main.go:141] libmachine: (addons-493204) DBG |     </serial>
	I1018 08:29:34.232095   10589 main.go:141] libmachine: (addons-493204) DBG |     <console type='pty'>
	I1018 08:29:34.232100   10589 main.go:141] libmachine: (addons-493204) DBG |       <target type='serial' port='0'/>
	I1018 08:29:34.232114   10589 main.go:141] libmachine: (addons-493204) DBG |     </console>
	I1018 08:29:34.232123   10589 main.go:141] libmachine: (addons-493204) DBG |     <input type='mouse' bus='ps2'/>
	I1018 08:29:34.232127   10589 main.go:141] libmachine: (addons-493204) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 08:29:34.232135   10589 main.go:141] libmachine: (addons-493204) DBG |     <audio id='1' type='none'/>
	I1018 08:29:34.232140   10589 main.go:141] libmachine: (addons-493204) DBG |     <memballoon model='virtio'>
	I1018 08:29:34.232151   10589 main.go:141] libmachine: (addons-493204) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 08:29:34.232158   10589 main.go:141] libmachine: (addons-493204) DBG |     </memballoon>
	I1018 08:29:34.232163   10589 main.go:141] libmachine: (addons-493204) DBG |     <rng model='virtio'>
	I1018 08:29:34.232170   10589 main.go:141] libmachine: (addons-493204) DBG |       <backend model='random'>/dev/random</backend>
	I1018 08:29:34.232176   10589 main.go:141] libmachine: (addons-493204) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 08:29:34.232185   10589 main.go:141] libmachine: (addons-493204) DBG |     </rng>
	I1018 08:29:34.232190   10589 main.go:141] libmachine: (addons-493204) DBG |   </devices>
	I1018 08:29:34.232199   10589 main.go:141] libmachine: (addons-493204) DBG | </domain>
	I1018 08:29:34.232207   10589 main.go:141] libmachine: (addons-493204) DBG | 
	I1018 08:29:35.581976   10589 main.go:141] libmachine: (addons-493204) waiting for domain to start...
	I1018 08:29:35.583514   10589 main.go:141] libmachine: (addons-493204) domain is now running
	I1018 08:29:35.583545   10589 main.go:141] libmachine: (addons-493204) waiting for IP...
	I1018 08:29:35.584355   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:35.584899   10589 main.go:141] libmachine: (addons-493204) DBG | no network interface addresses found for domain addons-493204 (source=lease)
	I1018 08:29:35.584916   10589 main.go:141] libmachine: (addons-493204) DBG | trying to list again with source=arp
	I1018 08:29:35.585270   10589 main.go:141] libmachine: (addons-493204) DBG | unable to find current IP address of domain addons-493204 in network mk-addons-493204 (interfaces detected: [])
	I1018 08:29:35.585319   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:35.585260   10617 retry.go:31] will retry after 205.886562ms: waiting for domain to come up
	I1018 08:29:35.794432   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:35.795025   10589 main.go:141] libmachine: (addons-493204) DBG | no network interface addresses found for domain addons-493204 (source=lease)
	I1018 08:29:35.795042   10589 main.go:141] libmachine: (addons-493204) DBG | trying to list again with source=arp
	I1018 08:29:35.795332   10589 main.go:141] libmachine: (addons-493204) DBG | unable to find current IP address of domain addons-493204 in network mk-addons-493204 (interfaces detected: [])
	I1018 08:29:35.795406   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:35.795343   10617 retry.go:31] will retry after 388.702997ms: waiting for domain to come up
	I1018 08:29:36.186338   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:36.187067   10589 main.go:141] libmachine: (addons-493204) DBG | no network interface addresses found for domain addons-493204 (source=lease)
	I1018 08:29:36.187643   10589 main.go:141] libmachine: (addons-493204) DBG | trying to list again with source=arp
	I1018 08:29:36.187677   10589 main.go:141] libmachine: (addons-493204) DBG | unable to find current IP address of domain addons-493204 in network mk-addons-493204 (interfaces detected: [])
	I1018 08:29:36.187710   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:36.187423   10617 retry.go:31] will retry after 480.866685ms: waiting for domain to come up
	I1018 08:29:36.670160   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:36.670663   10589 main.go:141] libmachine: (addons-493204) DBG | no network interface addresses found for domain addons-493204 (source=lease)
	I1018 08:29:36.670689   10589 main.go:141] libmachine: (addons-493204) DBG | trying to list again with source=arp
	I1018 08:29:36.670962   10589 main.go:141] libmachine: (addons-493204) DBG | unable to find current IP address of domain addons-493204 in network mk-addons-493204 (interfaces detected: [])
	I1018 08:29:36.670989   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:36.670915   10617 retry.go:31] will retry after 483.189115ms: waiting for domain to come up
	I1018 08:29:37.155841   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:37.156554   10589 main.go:141] libmachine: (addons-493204) DBG | no network interface addresses found for domain addons-493204 (source=lease)
	I1018 08:29:37.156576   10589 main.go:141] libmachine: (addons-493204) DBG | trying to list again with source=arp
	I1018 08:29:37.156962   10589 main.go:141] libmachine: (addons-493204) DBG | unable to find current IP address of domain addons-493204 in network mk-addons-493204 (interfaces detected: [])
	I1018 08:29:37.157037   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:37.156948   10617 retry.go:31] will retry after 758.672189ms: waiting for domain to come up
	I1018 08:29:37.917305   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:37.918077   10589 main.go:141] libmachine: (addons-493204) DBG | no network interface addresses found for domain addons-493204 (source=lease)
	I1018 08:29:37.918101   10589 main.go:141] libmachine: (addons-493204) DBG | trying to list again with source=arp
	I1018 08:29:37.918414   10589 main.go:141] libmachine: (addons-493204) DBG | unable to find current IP address of domain addons-493204 in network mk-addons-493204 (interfaces detected: [])
	I1018 08:29:37.918448   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:37.918405   10617 retry.go:31] will retry after 619.834298ms: waiting for domain to come up
	I1018 08:29:38.540188   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:38.540673   10589 main.go:141] libmachine: (addons-493204) DBG | no network interface addresses found for domain addons-493204 (source=lease)
	I1018 08:29:38.540701   10589 main.go:141] libmachine: (addons-493204) DBG | trying to list again with source=arp
	I1018 08:29:38.540982   10589 main.go:141] libmachine: (addons-493204) DBG | unable to find current IP address of domain addons-493204 in network mk-addons-493204 (interfaces detected: [])
	I1018 08:29:38.541067   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:38.540974   10617 retry.go:31] will retry after 788.584626ms: waiting for domain to come up
	I1018 08:29:39.331040   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:39.331633   10589 main.go:141] libmachine: (addons-493204) DBG | no network interface addresses found for domain addons-493204 (source=lease)
	I1018 08:29:39.331660   10589 main.go:141] libmachine: (addons-493204) DBG | trying to list again with source=arp
	I1018 08:29:39.331945   10589 main.go:141] libmachine: (addons-493204) DBG | unable to find current IP address of domain addons-493204 in network mk-addons-493204 (interfaces detected: [])
	I1018 08:29:39.331981   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:39.331887   10617 retry.go:31] will retry after 1.37383461s: waiting for domain to come up
	I1018 08:29:40.707553   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:40.708126   10589 main.go:141] libmachine: (addons-493204) DBG | no network interface addresses found for domain addons-493204 (source=lease)
	I1018 08:29:40.708146   10589 main.go:141] libmachine: (addons-493204) DBG | trying to list again with source=arp
	I1018 08:29:40.708412   10589 main.go:141] libmachine: (addons-493204) DBG | unable to find current IP address of domain addons-493204 in network mk-addons-493204 (interfaces detected: [])
	I1018 08:29:40.708435   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:40.708386   10617 retry.go:31] will retry after 1.422068222s: waiting for domain to come up
	I1018 08:29:42.132324   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:42.132994   10589 main.go:141] libmachine: (addons-493204) DBG | no network interface addresses found for domain addons-493204 (source=lease)
	I1018 08:29:42.133045   10589 main.go:141] libmachine: (addons-493204) DBG | trying to list again with source=arp
	I1018 08:29:42.133306   10589 main.go:141] libmachine: (addons-493204) DBG | unable to find current IP address of domain addons-493204 in network mk-addons-493204 (interfaces detected: [])
	I1018 08:29:42.133346   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:42.133284   10617 retry.go:31] will retry after 1.735631415s: waiting for domain to come up
	I1018 08:29:43.871184   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:43.871947   10589 main.go:141] libmachine: (addons-493204) DBG | no network interface addresses found for domain addons-493204 (source=lease)
	I1018 08:29:43.871982   10589 main.go:141] libmachine: (addons-493204) DBG | trying to list again with source=arp
	I1018 08:29:43.872306   10589 main.go:141] libmachine: (addons-493204) DBG | unable to find current IP address of domain addons-493204 in network mk-addons-493204 (interfaces detected: [])
	I1018 08:29:43.872381   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:43.872301   10617 retry.go:31] will retry after 2.700570531s: waiting for domain to come up
	I1018 08:29:46.576395   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:46.576963   10589 main.go:141] libmachine: (addons-493204) DBG | no network interface addresses found for domain addons-493204 (source=lease)
	I1018 08:29:46.576984   10589 main.go:141] libmachine: (addons-493204) DBG | trying to list again with source=arp
	I1018 08:29:46.577244   10589 main.go:141] libmachine: (addons-493204) DBG | unable to find current IP address of domain addons-493204 in network mk-addons-493204 (interfaces detected: [])
	I1018 08:29:46.577262   10589 main.go:141] libmachine: (addons-493204) DBG | I1018 08:29:46.577231   10617 retry.go:31] will retry after 3.264999944s: waiting for domain to come up
	I1018 08:29:49.844540   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:49.845189   10589 main.go:141] libmachine: (addons-493204) found domain IP: 192.168.39.58
	I1018 08:29:49.845210   10589 main.go:141] libmachine: (addons-493204) reserving static IP address...
	I1018 08:29:49.845224   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has current primary IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:49.845701   10589 main.go:141] libmachine: (addons-493204) DBG | unable to find host DHCP lease matching {name: "addons-493204", mac: "52:54:00:48:27:75", ip: "192.168.39.58"} in network mk-addons-493204
	I1018 08:29:50.050590   10589 main.go:141] libmachine: (addons-493204) DBG | Getting to WaitForSSH function...
	I1018 08:29:50.050624   10589 main.go:141] libmachine: (addons-493204) reserved static IP address 192.168.39.58 for domain addons-493204
	I1018 08:29:50.050677   10589 main.go:141] libmachine: (addons-493204) waiting for SSH...
	I1018 08:29:50.053345   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.053859   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:48:27:75}
	I1018 08:29:50.053883   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.054257   10589 main.go:141] libmachine: (addons-493204) DBG | Using SSH client type: external
	I1018 08:29:50.054279   10589 main.go:141] libmachine: (addons-493204) DBG | Using SSH private key: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa (-rw-------)
	I1018 08:29:50.054295   10589 main.go:141] libmachine: (addons-493204) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 08:29:50.054336   10589 main.go:141] libmachine: (addons-493204) DBG | About to run SSH command:
	I1018 08:29:50.054353   10589 main.go:141] libmachine: (addons-493204) DBG | exit 0
	I1018 08:29:50.198250   10589 main.go:141] libmachine: (addons-493204) DBG | SSH cmd err, output: <nil>: 
	I1018 08:29:50.198588   10589 main.go:141] libmachine: (addons-493204) domain creation complete
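Note: the repeated "will retry after …" lines above come from a poll-with-backoff loop that waits for the freshly defined domain to pick up a DHCP lease. A minimal Go sketch of that pattern, where the lookup callback and backoff constants are assumptions for illustration, not minikube's actual retry.go code:

	package main

	import (
		"fmt"
		"log"
		"time"
	)

	// waitForIP polls lookup with a growing delay until it returns an address
	// or the deadline passes, mirroring the "will retry after ..." log lines.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			log.Printf("will retry after %v: waiting for domain to come up", delay)
			time.Sleep(delay)
			delay *= 2 // grow the interval between attempts
		}
		return "", fmt.Errorf("no IP within %v", timeout)
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 3 {
				return "", fmt.Errorf("no lease yet") // simulate the first empty lookups
			}
			return "192.168.39.58", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}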
	I1018 08:29:50.198947   10589 main.go:141] libmachine: (addons-493204) Calling .GetConfigRaw
	I1018 08:29:50.199543   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:29:50.199745   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:29:50.199960   10589 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1018 08:29:50.199978   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:29:50.201471   10589 main.go:141] libmachine: Detecting operating system of created instance...
	I1018 08:29:50.201484   10589 main.go:141] libmachine: Waiting for SSH to be available...
	I1018 08:29:50.201489   10589 main.go:141] libmachine: Getting to WaitForSSH function...
	I1018 08:29:50.201494   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:29:50.204191   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.204621   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:50.204637   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.204859   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:29:50.205068   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:50.205233   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:50.205423   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:29:50.205602   10589 main.go:141] libmachine: Using SSH client type: native
	I1018 08:29:50.205851   10589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1018 08:29:50.205865   10589 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1018 08:29:50.317679   10589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 08:29:50.317704   10589 main.go:141] libmachine: Detecting the provisioner...
	I1018 08:29:50.317711   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:29:50.321012   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.321588   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:50.321622   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.321824   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:29:50.322251   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:50.322542   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:50.322718   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:29:50.322941   10589 main.go:141] libmachine: Using SSH client type: native
	I1018 08:29:50.323173   10589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1018 08:29:50.323197   10589 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1018 08:29:50.436285   10589 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1018 08:29:50.436360   10589 main.go:141] libmachine: found compatible host: buildroot
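The "found compatible host: buildroot" decision keys off the ID field of the /etc/os-release output captured over SSH just above. A small Go sketch of that parse, using the captured output as sample input (illustrative only, not the libmachine detector itself):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// osRelease mirrors the output shown above; in the real flow it is the
	// result of running "cat /etc/os-release" over SSH.
	const osRelease = `NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"`

	// detectProvisioner returns the ID= value, the field the compatibility
	// check keys off.
	func detectProvisioner(out string) string {
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if v, ok := strings.CutPrefix(line, "ID="); ok {
				return strings.Trim(v, `"`)
			}
		}
		return ""
	}

	func main() {
		fmt.Println(detectProvisioner(osRelease)) // buildroot
	}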
	I1018 08:29:50.436367   10589 main.go:141] libmachine: Provisioning with buildroot...
	I1018 08:29:50.436374   10589 main.go:141] libmachine: (addons-493204) Calling .GetMachineName
	I1018 08:29:50.436643   10589 buildroot.go:166] provisioning hostname "addons-493204"
	I1018 08:29:50.436671   10589 main.go:141] libmachine: (addons-493204) Calling .GetMachineName
	I1018 08:29:50.436881   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:29:50.440155   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.440545   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:50.440574   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.440754   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:29:50.441088   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:50.441301   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:50.441498   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:29:50.441671   10589 main.go:141] libmachine: Using SSH client type: native
	I1018 08:29:50.441890   10589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1018 08:29:50.441906   10589 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-493204 && echo "addons-493204" | sudo tee /etc/hostname
	I1018 08:29:50.576506   10589 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-493204
	
	I1018 08:29:50.576562   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:29:50.580010   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.580460   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:50.580492   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.580743   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:29:50.581031   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:50.581248   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:50.581432   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:29:50.581619   10589 main.go:141] libmachine: Using SSH client type: native
	I1018 08:29:50.581819   10589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1018 08:29:50.581835   10589 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-493204' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-493204/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-493204' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 08:29:50.704460   10589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
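The SSH command above idempotently maps 127.0.1.1 to the node hostname in /etc/hosts. A hedged Go sketch of how such a command string can be assembled for an arbitrary hostname (the helper name is made up for illustration):

	package main

	import "fmt"

	// hostsFixupCmd builds an idempotent shell snippet equivalent in spirit to
	// the command shown above: rewrite the 127.0.1.1 entry if present,
	// otherwise append one.
	func hostsFixupCmd(hostname string) string {
		return fmt.Sprintf(`if ! grep -xq '.*\s%s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %s' | sudo tee -a /etc/hosts
	  fi
	fi`, hostname, hostname, hostname)
	}

	func main() {
		fmt.Println(hostsFixupCmd("addons-493204"))
	}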
	I1018 08:29:50.704491   10589 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21767-6063/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-6063/.minikube}
	I1018 08:29:50.704526   10589 buildroot.go:174] setting up certificates
	I1018 08:29:50.704537   10589 provision.go:84] configureAuth start
	I1018 08:29:50.704547   10589 main.go:141] libmachine: (addons-493204) Calling .GetMachineName
	I1018 08:29:50.704946   10589 main.go:141] libmachine: (addons-493204) Calling .GetIP
	I1018 08:29:50.708108   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.708477   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:50.708526   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.708680   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:29:50.711693   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.712143   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:50.712164   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.712353   10589 provision.go:143] copyHostCerts
	I1018 08:29:50.712434   10589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-6063/.minikube/ca.pem (1078 bytes)
	I1018 08:29:50.712588   10589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-6063/.minikube/cert.pem (1123 bytes)
	I1018 08:29:50.712692   10589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-6063/.minikube/key.pem (1675 bytes)
	I1018 08:29:50.712769   10589 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca-key.pem org=jenkins.addons-493204 san=[127.0.0.1 192.168.39.58 addons-493204 localhost minikube]
	I1018 08:29:50.748206   10589 provision.go:177] copyRemoteCerts
	I1018 08:29:50.748269   10589 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 08:29:50.748303   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:29:50.751643   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.751996   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:50.752031   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.752284   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:29:50.752540   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:50.752718   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:29:50.752877   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:29:50.841869   10589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 08:29:50.878653   10589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 08:29:50.911446   10589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 08:29:50.941823   10589 provision.go:87] duration metric: took 237.272372ms to configureAuth
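The configureAuth step above generates a server certificate whose SANs match the logged list (127.0.0.1, 192.168.39.58, addons-493204, localhost, minikube). A rough Go sketch of issuing a certificate with those SANs; it self-signs for brevity, whereas the real flow signs with the minikube CA, so treat it as illustrative only:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// selfSignedServerCert issues a PEM-encoded certificate with the given SANs.
	func selfSignedServerCert(dnsNames []string, ips []net.IP) ([]byte, error) {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-493204"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration value in the config
			DNSNames:     dnsNames,
			IPAddresses:  ips,
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			return nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
	}

	func main() {
		pemBytes, err := selfSignedServerCert(
			[]string{"addons-493204", "localhost", "minikube"},
			[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.58")},
		)
		fmt.Println(len(pemBytes), err)
	}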
	I1018 08:29:50.941849   10589 buildroot.go:189] setting minikube options for container-runtime
	I1018 08:29:50.942046   10589 config.go:182] Loaded profile config "addons-493204": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:29:50.942117   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:29:50.945611   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.946015   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:50.946035   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:50.946275   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:29:50.946481   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:50.946736   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:50.946968   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:29:50.947252   10589 main.go:141] libmachine: Using SSH client type: native
	I1018 08:29:50.947497   10589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1018 08:29:50.947514   10589 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 08:29:51.206311   10589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 08:29:51.206344   10589 main.go:141] libmachine: Checking connection to Docker...
	I1018 08:29:51.206378   10589 main.go:141] libmachine: (addons-493204) Calling .GetURL
	I1018 08:29:51.207911   10589 main.go:141] libmachine: (addons-493204) DBG | using libvirt version 8000000
	I1018 08:29:51.210547   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:51.210855   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:51.210885   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:51.211090   10589 main.go:141] libmachine: Docker is up and running!
	I1018 08:29:51.211109   10589 main.go:141] libmachine: Reticulating splines...
	I1018 08:29:51.211118   10589 client.go:171] duration metric: took 18.178048272s to LocalClient.Create
	I1018 08:29:51.211139   10589 start.go:167] duration metric: took 18.178122362s to libmachine.API.Create "addons-493204"
	I1018 08:29:51.211149   10589 start.go:293] postStartSetup for "addons-493204" (driver="kvm2")
	I1018 08:29:51.211158   10589 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 08:29:51.211173   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:29:51.211422   10589 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 08:29:51.211448   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:29:51.214192   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:51.214573   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:51.214601   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:51.214823   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:29:51.215093   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:51.215274   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:29:51.215587   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:29:51.303163   10589 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 08:29:51.308317   10589 info.go:137] Remote host: Buildroot 2025.02
	I1018 08:29:51.308343   10589 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-6063/.minikube/addons for local assets ...
	I1018 08:29:51.308445   10589 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-6063/.minikube/files for local assets ...
	I1018 08:29:51.308475   10589 start.go:296] duration metric: took 97.320007ms for postStartSetup
	I1018 08:29:51.308521   10589 main.go:141] libmachine: (addons-493204) Calling .GetConfigRaw
	I1018 08:29:51.309095   10589 main.go:141] libmachine: (addons-493204) Calling .GetIP
	I1018 08:29:51.312100   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:51.312544   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:51.312571   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:51.312858   10589 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/config.json ...
	I1018 08:29:51.313097   10589 start.go:128] duration metric: took 18.297813076s to createHost
	I1018 08:29:51.313122   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:29:51.315621   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:51.316155   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:51.316209   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:51.316390   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:29:51.316619   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:51.316786   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:51.316961   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:29:51.317236   10589 main.go:141] libmachine: Using SSH client type: native
	I1018 08:29:51.317479   10589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1018 08:29:51.317494   10589 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 08:29:51.430281   10589 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760776191.388031912
	
	I1018 08:29:51.430304   10589 fix.go:216] guest clock: 1760776191.388031912
	I1018 08:29:51.430312   10589 fix.go:229] Guest: 2025-10-18 08:29:51.388031912 +0000 UTC Remote: 2025-10-18 08:29:51.313110074 +0000 UTC m=+18.420402119 (delta=74.921838ms)
	I1018 08:29:51.430331   10589 fix.go:200] guest clock delta is within tolerance: 74.921838ms
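The guest clock check parses the `date +%s.%N` output and compares it with the host-side timestamp. The sketch below reproduces the 74.921838ms delta from the values logged above; the tolerance constant is an assumption for illustration:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "date +%s.%N" output into a time.Time. It assumes
	// the fractional part is full nanoseconds (9 digits), as in the sample.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1760776191.388031912")
		if err != nil {
			panic(err)
		}
		remote := time.Date(2025, 10, 18, 8, 29, 51, 313110074, time.UTC) // host-side timestamp from the log
		delta := guest.Sub(remote)
		const tolerance = 1 * time.Second // assumed threshold, for illustration
		fmt.Printf("delta=%v within tolerance=%v: %v\n",
			delta, tolerance, math.Abs(delta.Seconds()) < tolerance.Seconds())
	}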
	I1018 08:29:51.430335   10589 start.go:83] releasing machines lock for "addons-493204", held for 18.415134265s
	I1018 08:29:51.430354   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:29:51.430631   10589 main.go:141] libmachine: (addons-493204) Calling .GetIP
	I1018 08:29:51.434046   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:51.434516   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:51.434565   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:51.434755   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:29:51.435420   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:29:51.435611   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:29:51.435735   10589 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 08:29:51.435780   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:29:51.435890   10589 ssh_runner.go:195] Run: cat /version.json
	I1018 08:29:51.435939   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:29:51.439286   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:51.439548   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:51.439733   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:51.439768   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:51.439938   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:29:51.440070   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:51.440099   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:51.440146   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:51.440256   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:29:51.440317   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:29:51.440409   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:29:51.440465   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:29:51.440571   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:29:51.440727   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:29:51.548787   10589 ssh_runner.go:195] Run: systemctl --version
	I1018 08:29:51.555581   10589 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 08:29:51.720112   10589 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 08:29:51.727719   10589 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 08:29:51.727803   10589 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 08:29:51.749720   10589 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 08:29:51.749746   10589 start.go:495] detecting cgroup driver to use...
	I1018 08:29:51.749810   10589 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 08:29:51.775418   10589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 08:29:51.795571   10589 docker.go:218] disabling cri-docker service (if available) ...
	I1018 08:29:51.795628   10589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 08:29:51.815779   10589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 08:29:51.834621   10589 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 08:29:51.997399   10589 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 08:29:52.217256   10589 docker.go:234] disabling docker service ...
	I1018 08:29:52.217335   10589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 08:29:52.235894   10589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 08:29:52.252277   10589 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 08:29:52.419086   10589 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 08:29:52.559859   10589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 08:29:52.575915   10589 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 08:29:52.600120   10589 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 08:29:52.600175   10589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:52.613121   10589 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 08:29:52.613197   10589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:52.626440   10589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:52.639273   10589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:52.652300   10589 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 08:29:52.667288   10589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:52.679906   10589 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:29:52.702036   10589 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
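The sequence of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf: it pins the pause image, switches the cgroup manager to cgroupfs, and re-adds conmon_cgroup = "pod". A Go sketch applying the same substitutions to an in-memory sample of the file (the sample content is invented for illustration):

	package main

	import (
		"fmt"
		"regexp"
	)

	// crioConf stands in for 02-crio.conf; the values are placeholders.
	const crioConf = `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	`

	func main() {
		out := crioConf
		// Pin the pause image, as in the first sed command above.
		out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(out, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// Switch the cgroup manager to cgroupfs.
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(out, `cgroup_manager = "cgroupfs"`)
		// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
		out = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
			ReplaceAllString(out, "")
		out = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(out, "${1}\nconmon_cgroup = \"pod\"")
		fmt.Print(out)
	}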
	I1018 08:29:52.716367   10589 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 08:29:52.728353   10589 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 08:29:52.728414   10589 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 08:29:52.748519   10589 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 08:29:52.760567   10589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:29:52.901142   10589 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 08:29:53.014425   10589 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 08:29:53.014512   10589 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 08:29:53.020131   10589 start.go:563] Will wait 60s for crictl version
	I1018 08:29:53.020203   10589 ssh_runner.go:195] Run: which crictl
	I1018 08:29:53.024516   10589 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 08:29:53.065794   10589 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
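Before querying crictl, the flow waits up to 60s for the CRI-O socket path to appear. A minimal Go sketch of such a stat-based wait (the polling interval and error text are assumptions, not the actual start.go code):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket stats the path until it exists or the timeout expires.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}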
	I1018 08:29:53.066128   10589 ssh_runner.go:195] Run: crio --version
	I1018 08:29:53.096748   10589 ssh_runner.go:195] Run: crio --version
	I1018 08:29:53.127281   10589 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 08:29:53.128916   10589 main.go:141] libmachine: (addons-493204) Calling .GetIP
	I1018 08:29:53.132266   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:53.132655   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:29:53.132684   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:29:53.133018   10589 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1018 08:29:53.138094   10589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 08:29:53.153892   10589 kubeadm.go:883] updating cluster {Name:addons-493204 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-493204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 08:29:53.154053   10589 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:29:53.154112   10589 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:29:53.192652   10589 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 08:29:53.192719   10589 ssh_runner.go:195] Run: which lz4
	I1018 08:29:53.197101   10589 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 08:29:53.202206   10589 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 08:29:53.202242   10589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1018 08:29:54.706436   10589 crio.go:462] duration metric: took 1.509373009s to copy over tarball
	I1018 08:29:54.706526   10589 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1018 08:29:56.389275   10589 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.68271681s)
	I1018 08:29:56.389303   10589 crio.go:469] duration metric: took 1.682835691s to extract the tarball
	I1018 08:29:56.389311   10589 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1018 08:29:56.431303   10589 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:29:56.477750   10589 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:29:56.477775   10589 cache_images.go:85] Images are preloaded, skipping loading
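The preload check runs `sudo crictl images --output json` and looks for the expected repo tags. A hedged Go sketch of that lookup against a simplified sample of crictl's JSON output:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// hasImage reports whether a repo tag appears in crictl's JSON image list;
	// the struct below only models the fields this check needs.
	func hasImage(crictlJSON []byte, tag string) (bool, error) {
		var out struct {
			Images []struct {
				RepoTags []string `json:"repoTags"`
			} `json:"images"`
		}
		if err := json.Unmarshal(crictlJSON, &out); err != nil {
			return false, err
		}
		for _, img := range out.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"]}]}`)
		ok, err := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.34.1")
		fmt.Println(ok, err)
	}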
	I1018 08:29:56.477781   10589 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.34.1 crio true true} ...
	I1018 08:29:56.477883   10589 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-493204 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-493204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 08:29:56.477978   10589 ssh_runner.go:195] Run: crio config
	I1018 08:29:56.527328   10589 cni.go:84] Creating CNI manager for ""
	I1018 08:29:56.527365   10589 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 08:29:56.527390   10589 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 08:29:56.527421   10589 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-493204 NodeName:addons-493204 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 08:29:56.527546   10589 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-493204"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 08:29:56.527610   10589 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 08:29:56.540367   10589 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 08:29:56.540462   10589 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 08:29:56.552942   10589 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1018 08:29:56.574750   10589 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 08:29:56.596736   10589 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1018 08:29:56.620644   10589 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I1018 08:29:56.625339   10589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 08:29:56.641358   10589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:29:56.784447   10589 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:29:56.819305   10589 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204 for IP: 192.168.39.58
	I1018 08:29:56.819331   10589 certs.go:195] generating shared ca certs ...
	I1018 08:29:56.819354   10589 certs.go:227] acquiring lock for ca certs: {Name:mk72b8eadb27773dc6399bddc4b95ee0664cbf67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:56.819526   10589 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-6063/.minikube/ca.key
	I1018 08:29:57.005716   10589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt ...
	I1018 08:29:57.005755   10589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt: {Name:mkf7244e2b49e11dcd447a9018f17595f1c9208a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:57.005970   10589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-6063/.minikube/ca.key ...
	I1018 08:29:57.005987   10589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/ca.key: {Name:mka2de5ada6e85278049945c19b5da2e7da29b53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:57.006097   10589 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.key
	I1018 08:29:57.405500   10589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.crt ...
	I1018 08:29:57.405528   10589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.crt: {Name:mk2eb73708e729d269bf8bd2da6573a9732961fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:57.405700   10589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.key ...
	I1018 08:29:57.405720   10589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.key: {Name:mkf0bdfc15a502c18808f241d4ce112d9ba9edba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:57.405826   10589 certs.go:257] generating profile certs ...
	I1018 08:29:57.405932   10589 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.key
	I1018 08:29:57.405971   10589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt with IP's: []
	I1018 08:29:57.533373   10589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt ...
	I1018 08:29:57.533406   10589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: {Name:mkc1bdbb9f84e6f7585fa02ad61e94b9982d9cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:57.533605   10589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.key ...
	I1018 08:29:57.533623   10589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.key: {Name:mk531442783e7aa06aea727e8d0357caa8efbb47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:57.533744   10589 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/apiserver.key.97936807
	I1018 08:29:57.533775   10589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/apiserver.crt.97936807 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.58]
	I1018 08:29:57.654576   10589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/apiserver.crt.97936807 ...
	I1018 08:29:57.654608   10589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/apiserver.crt.97936807: {Name:mk434ad1d60dd50f42453c3ab69bb6cc9d864bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:57.654782   10589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/apiserver.key.97936807 ...
	I1018 08:29:57.654803   10589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/apiserver.key.97936807: {Name:mk2f6e60d6398f52781426b3afbf289b941a27dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:57.654940   10589 certs.go:382] copying /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/apiserver.crt.97936807 -> /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/apiserver.crt
	I1018 08:29:57.655094   10589 certs.go:386] copying /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/apiserver.key.97936807 -> /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/apiserver.key
	I1018 08:29:57.655181   10589 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/proxy-client.key
	I1018 08:29:57.655208   10589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/proxy-client.crt with IP's: []
	I1018 08:29:58.066899   10589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/proxy-client.crt ...
	I1018 08:29:58.066940   10589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/proxy-client.crt: {Name:mkc2dfa503807d491a3e50fbb26dcdf7cb782c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:58.067152   10589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/proxy-client.key ...
	I1018 08:29:58.067169   10589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/proxy-client.key: {Name:mke2c7510913f0fcb7dfbc1370373101098c8f85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:58.067376   10589 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 08:29:58.067413   10589 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem (1078 bytes)
	I1018 08:29:58.067440   10589 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem (1123 bytes)
	I1018 08:29:58.067470   10589 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/key.pem (1675 bytes)
	I1018 08:29:58.068012   10589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 08:29:58.101483   10589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 08:29:58.134294   10589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 08:29:58.167881   10589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 08:29:58.199939   10589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 08:29:58.232912   10589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 08:29:58.265138   10589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 08:29:58.298564   10589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 08:29:58.331717   10589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 08:29:58.363792   10589 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 08:29:58.386971   10589 ssh_runner.go:195] Run: openssl version
	I1018 08:29:58.393999   10589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 08:29:58.413811   10589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:29:58.422021   10589 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:29:58.422092   10589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:29:58.431182   10589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
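	The preceding steps wire the minikube CA into OpenSSL's hashed-directory lookup: the certificate is linked into /etc/ssl/certs, its subject hash is computed, and a symlink named after that hash (with a ".0" suffix) is created so TLS clients scanning the directory can resolve the CA. A minimal sketch of the same derivation, illustrative only and not part of this run:

		# print the subject-name hash OpenSSL uses for lookups in /etc/ssl/certs (b5213941 for this CA, per the log line above)
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		# expose the cert under that hash with the ".0" suffix the hashed-directory lookup expects
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0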
	I1018 08:29:58.447237   10589 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 08:29:58.455035   10589 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 08:29:58.455105   10589 kubeadm.go:400] StartCluster: {Name:addons-493204 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-493204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:29:58.455233   10589 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:29:58.455298   10589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:29:58.503057   10589 cri.go:89] found id: ""
	I1018 08:29:58.503151   10589 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 08:29:58.516628   10589 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 08:29:58.529620   10589 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 08:29:58.542772   10589 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 08:29:58.542797   10589 kubeadm.go:157] found existing configuration files:
	
	I1018 08:29:58.542857   10589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 08:29:58.555111   10589 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 08:29:58.555169   10589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 08:29:58.567810   10589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 08:29:58.579643   10589 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 08:29:58.579705   10589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 08:29:58.592854   10589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 08:29:58.606115   10589 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 08:29:58.606175   10589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 08:29:58.619165   10589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 08:29:58.631955   10589 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 08:29:58.632077   10589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 08:29:58.644992   10589 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1018 08:29:58.697392   10589 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 08:29:58.697467   10589 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 08:29:58.797194   10589 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 08:29:58.797316   10589 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 08:29:58.797434   10589 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 08:29:58.808609   10589 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 08:29:59.021188   10589 out.go:252]   - Generating certificates and keys ...
	I1018 08:29:59.021324   10589 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 08:29:59.021410   10589 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 08:29:59.021511   10589 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 08:29:59.377778   10589 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 08:29:59.639200   10589 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 08:29:59.747843   10589 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 08:29:59.943963   10589 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 08:29:59.944145   10589 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-493204 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
	I1018 08:30:00.032986   10589 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 08:30:00.033171   10589 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-493204 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
	I1018 08:30:00.321908   10589 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 08:30:00.713843   10589 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 08:30:01.327134   10589 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 08:30:01.327241   10589 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 08:30:01.724441   10589 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 08:30:02.481192   10589 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 08:30:02.659479   10589 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 08:30:02.903671   10589 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 08:30:03.318460   10589 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 08:30:03.319193   10589 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 08:30:03.321606   10589 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 08:30:03.332857   10589 out.go:252]   - Booting up control plane ...
	I1018 08:30:03.333056   10589 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 08:30:03.333138   10589 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 08:30:03.333197   10589 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 08:30:03.344606   10589 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 08:30:03.344707   10589 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 08:30:03.353671   10589 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 08:30:03.354226   10589 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 08:30:03.354277   10589 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 08:30:03.532167   10589 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 08:30:03.532742   10589 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 08:30:04.533161   10589 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001541247s
	I1018 08:30:04.538179   10589 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 08:30:04.538328   10589 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.58:8443/livez
	I1018 08:30:04.539492   10589 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 08:30:04.539590   10589 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 08:30:07.110528   10589 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.57354295s
	I1018 08:30:08.225532   10589 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.689854111s
	I1018 08:30:10.540763   10589 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.005709877s
	I1018 08:30:10.560965   10589 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 08:30:10.581265   10589 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 08:30:10.596106   10589 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 08:30:10.596356   10589 kubeadm.go:318] [mark-control-plane] Marking the node addons-493204 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 08:30:10.615654   10589 kubeadm.go:318] [bootstrap-token] Using token: qahj23.lhmr5nhqg6shkezw
	I1018 08:30:10.617537   10589 out.go:252]   - Configuring RBAC rules ...
	I1018 08:30:10.617683   10589 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 08:30:10.623971   10589 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 08:30:10.632083   10589 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 08:30:10.636424   10589 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 08:30:10.648184   10589 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 08:30:10.657073   10589 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 08:30:10.952523   10589 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 08:30:11.408992   10589 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 08:30:11.948620   10589 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 08:30:11.951217   10589 kubeadm.go:318] 
	I1018 08:30:11.951308   10589 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 08:30:11.951321   10589 kubeadm.go:318] 
	I1018 08:30:11.951419   10589 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 08:30:11.951429   10589 kubeadm.go:318] 
	I1018 08:30:11.951476   10589 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 08:30:11.951582   10589 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 08:30:11.951696   10589 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 08:30:11.951715   10589 kubeadm.go:318] 
	I1018 08:30:11.951783   10589 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 08:30:11.951807   10589 kubeadm.go:318] 
	I1018 08:30:11.951847   10589 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 08:30:11.951853   10589 kubeadm.go:318] 
	I1018 08:30:11.951899   10589 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 08:30:11.951984   10589 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 08:30:11.952041   10589 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 08:30:11.952047   10589 kubeadm.go:318] 
	I1018 08:30:11.952127   10589 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 08:30:11.952221   10589 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 08:30:11.952243   10589 kubeadm.go:318] 
	I1018 08:30:11.952375   10589 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token qahj23.lhmr5nhqg6shkezw \
	I1018 08:30:11.952504   10589 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c4d60fb4a1ceaafe1b1d4013b4f6ceb431304abfc1a8d1095fcadfbdc8e3b7b9 \
	I1018 08:30:11.952537   10589 kubeadm.go:318] 	--control-plane 
	I1018 08:30:11.952544   10589 kubeadm.go:318] 
	I1018 08:30:11.952663   10589 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 08:30:11.952673   10589 kubeadm.go:318] 
	I1018 08:30:11.952784   10589 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token qahj23.lhmr5nhqg6shkezw \
	I1018 08:30:11.952950   10589 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c4d60fb4a1ceaafe1b1d4013b4f6ceb431304abfc1a8d1095fcadfbdc8e3b7b9 
	I1018 08:30:11.955322   10589 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 08:30:11.955363   10589 cni.go:84] Creating CNI manager for ""
	I1018 08:30:11.955374   10589 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 08:30:11.958172   10589 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 08:30:11.959941   10589 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 08:30:11.980161   10589 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1018 08:30:12.013353   10589 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 08:30:12.013468   10589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:12.013506   10589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-493204 minikube.k8s.io/updated_at=2025_10_18T08_30_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=addons-493204 minikube.k8s.io/primary=true
	I1018 08:30:12.042523   10589 ops.go:34] apiserver oom_adj: -16
	I1018 08:30:12.176229   10589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:12.676400   10589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:13.176900   10589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:13.676418   10589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:14.176746   10589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:14.676457   10589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:15.176872   10589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:15.676639   10589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:15.758226   10589 kubeadm.go:1113] duration metric: took 3.744833221s to wait for elevateKubeSystemPrivileges
	I1018 08:30:15.758270   10589 kubeadm.go:402] duration metric: took 17.303170815s to StartCluster
	I1018 08:30:15.758293   10589 settings.go:142] acquiring lock: {Name:mk5c51ba919dd454ddb697f518b92637a3560487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:15.758437   10589 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-6063/kubeconfig
	I1018 08:30:15.758813   10589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/kubeconfig: {Name:mkb340db398364bcc27d468da7444ccfad7b82c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:15.759090   10589 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:30:15.759114   10589 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 08:30:15.759109   10589 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 08:30:15.759251   10589 addons.go:69] Setting yakd=true in profile "addons-493204"
	I1018 08:30:15.759273   10589 addons.go:238] Setting addon yakd=true in "addons-493204"
	I1018 08:30:15.759292   10589 addons.go:69] Setting default-storageclass=true in profile "addons-493204"
	I1018 08:30:15.759302   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.759296   10589 addons.go:69] Setting inspektor-gadget=true in profile "addons-493204"
	I1018 08:30:15.759317   10589 config.go:182] Loaded profile config "addons-493204": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:30:15.759320   10589 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-493204"
	I1018 08:30:15.759322   10589 addons.go:69] Setting registry-creds=true in profile "addons-493204"
	I1018 08:30:15.759318   10589 addons.go:69] Setting cloud-spanner=true in profile "addons-493204"
	I1018 08:30:15.759331   10589 addons.go:238] Setting addon inspektor-gadget=true in "addons-493204"
	I1018 08:30:15.759353   10589 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-493204"
	I1018 08:30:15.759357   10589 addons.go:69] Setting ingress-dns=true in profile "addons-493204"
	I1018 08:30:15.759362   10589 addons.go:238] Setting addon cloud-spanner=true in "addons-493204"
	I1018 08:30:15.759368   10589 addons.go:238] Setting addon ingress-dns=true in "addons-493204"
	I1018 08:30:15.759367   10589 addons.go:69] Setting ingress=true in profile "addons-493204"
	I1018 08:30:15.759377   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.759384   10589 addons.go:238] Setting addon ingress=true in "addons-493204"
	I1018 08:30:15.759385   10589 addons.go:69] Setting metrics-server=true in profile "addons-493204"
	I1018 08:30:15.759391   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.759392   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.759399   10589 addons.go:238] Setting addon metrics-server=true in "addons-493204"
	I1018 08:30:15.759416   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.759422   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.759537   10589 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-493204"
	I1018 08:30:15.759568   10589 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-493204"
	I1018 08:30:15.759599   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.759766   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.759788   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.759810   10589 addons.go:69] Setting registry=true in profile "addons-493204"
	I1018 08:30:15.759802   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.759823   10589 addons.go:238] Setting addon registry=true in "addons-493204"
	I1018 08:30:15.759833   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.759838   10589 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-493204"
	I1018 08:30:15.759342   10589 addons.go:69] Setting gcp-auth=true in profile "addons-493204"
	I1018 08:30:15.759847   10589 addons.go:69] Setting volumesnapshots=true in profile "addons-493204"
	I1018 08:30:15.759852   10589 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-493204"
	I1018 08:30:15.759854   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.759856   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.759860   10589 addons.go:238] Setting addon volumesnapshots=true in "addons-493204"
	I1018 08:30:15.759824   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.759872   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.759877   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.759880   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.759908   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.759982   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.760006   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.759311   10589 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-493204"
	I1018 08:30:15.759348   10589 addons.go:238] Setting addon registry-creds=true in "addons-493204"
	I1018 08:30:15.760117   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.760238   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.760241   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.759840   10589 addons.go:69] Setting volcano=true in profile "addons-493204"
	I1018 08:30:15.760251   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.760262   10589 addons.go:238] Setting addon volcano=true in "addons-493204"
	I1018 08:30:15.759862   10589 mustload.go:65] Loading cluster: addons-493204
	I1018 08:30:15.760277   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.760288   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.760412   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.760415   10589 config.go:182] Loaded profile config "addons-493204": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:30:15.760443   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.760536   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.759831   10589 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-493204"
	I1018 08:30:15.760761   10589 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-493204"
	I1018 08:30:15.760882   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.759810   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.759377   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.761674   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.761702   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.759822   10589 addons.go:69] Setting storage-provisioner=true in profile "addons-493204"
	I1018 08:30:15.761875   10589 addons.go:238] Setting addon storage-provisioner=true in "addons-493204"
	I1018 08:30:15.761903   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.760263   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.760604   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.762395   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.760705   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.762629   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.759842   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.759823   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.762992   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.760743   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.767086   10589 out.go:179] * Verifying Kubernetes components...
	I1018 08:30:15.769123   10589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:30:15.776977   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.777042   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.778271   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.778559   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.792214   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41351
	I1018 08:30:15.793407   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1018 08:30:15.794671   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.795330   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.795451   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I1018 08:30:15.795659   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.795797   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.796424   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.797103   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.797209   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.797312   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.797338   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.797777   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.798424   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.798465   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.798745   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34355
	I1018 08:30:15.799048   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I1018 08:30:15.799549   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.799956   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I1018 08:30:15.800429   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.800458   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.804735   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42853
	I1018 08:30:15.805325   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.805419   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.805457   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.805497   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.805914   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.805941   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.805982   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.805992   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.806053   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35009
	I1018 08:30:15.806133   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.806154   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.806397   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.806431   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.809834   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39655
	I1018 08:30:15.809849   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.809872   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.810501   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.811042   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.811078   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.811294   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I1018 08:30:15.811542   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.811589   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.811791   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.811808   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.812229   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.812299   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.812938   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.812979   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.813212   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.813739   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.813752   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.814109   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.815355   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.815581   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.818007   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43683
	I1018 08:30:15.818607   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.818609   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.819268   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.819288   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.819374   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.819891   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.819903   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.819960   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.823338   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.823431   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.823836   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.824687   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.824716   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.824745   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.825396   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.825556   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.825595   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.825958   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.826010   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.828724   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35889
	I1018 08:30:15.828739   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I1018 08:30:15.832068   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33415
	I1018 08:30:15.832944   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.833524   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.833544   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.834096   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46629
	I1018 08:30:15.834175   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.834884   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.835014   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.835726   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.836360   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.836381   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.836752   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.837352   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.837388   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.837737   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I1018 08:30:15.840017   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.840789   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.840965   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.841666   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.842438   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.842483   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.842968   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42711
	I1018 08:30:15.846606   10589 addons.go:238] Setting addon default-storageclass=true in "addons-493204"
	I1018 08:30:15.846655   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.847054   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.847065   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37119
	I1018 08:30:15.847070   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
	I1018 08:30:15.851469   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.851539   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.851558   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.851637   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.851683   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.852120   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.852135   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.852150   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.852155   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.852165   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.852168   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.852570   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.852749   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.852781   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.853142   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.853175   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.853730   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.853782   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.853807   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.854311   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.854515   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.856405   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.857370   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.857959   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.858495   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.858520   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.858960   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.859717   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.859947   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.860338   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.860384   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.860666   10589 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 08:30:15.862347   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39583
	I1018 08:30:15.862423   10589 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 08:30:15.862540   10589 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:30:15.862553   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 08:30:15.862574   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:15.863515   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.865546   10589 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 08:30:15.867327   10589 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 08:30:15.868471   10589 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-493204"
	I1018 08:30:15.868513   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:15.868899   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.869049   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.870931   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.871374   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:15.871422   10589 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 08:30:15.871426   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42311
	I1018 08:30:15.871856   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.872031   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:15.872725   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.872826   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.872893   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33677
	I1018 08:30:15.873135   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:15.873295   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:15.873412   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:15.874223   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.874321   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I1018 08:30:15.874617   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46471
	I1018 08:30:15.874735   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.874749   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.875426   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.875587   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.875612   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.875600   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.875660   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.875679   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.876201   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.876259   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.876283   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.876298   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.876299   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.876471   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.877325   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.877684   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.879084   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.879733   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41745
	I1018 08:30:15.880244   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.882715   10589 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 08:30:15.883086   10589 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 08:30:15.883419   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.884041   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.884068   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.884142   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.886085   10589 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 08:30:15.888032   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.888130   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.888219   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I1018 08:30:15.888564   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.888829   10589 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 08:30:15.888855   10589 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 08:30:15.888961   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:15.889573   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.890081   10589 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:30:15.890105   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 08:30:15.890124   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:15.890223   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.890303   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.890321   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.891618   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.891875   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.893304   10589 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 08:30:15.893338   10589 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 08:30:15.894103   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.894346   10589 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 08:30:15.894894   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.894931   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.895347   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38103
	I1018 08:30:15.895452   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.895973   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.896635   10589 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 08:30:15.896652   10589 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 08:30:15.896684   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:15.896807   10589 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 08:30:15.896833   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 08:30:15.896849   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:15.897100   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34149
	I1018 08:30:15.898074   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39929
	I1018 08:30:15.899367   10589 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 08:30:15.899443   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35723
	I1018 08:30:15.900030   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.899486   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.899523   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.900759   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.900802   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.900818   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.901249   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.901268   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.901691   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.902824   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.902871   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.903314   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.903402   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.903407   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.903994   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.904016   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.904094   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.904383   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.904419   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.904435   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.904914   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.904971   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.905376   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.905426   10589 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 08:30:15.905687   10589 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 08:30:15.905850   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38013
	I1018 08:30:15.906026   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.906367   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.906993   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.907435   10589 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 08:30:15.907528   10589 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 08:30:15.907548   10589 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 08:30:15.907721   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:15.907815   10589 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:30:15.907961   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 08:30:15.908168   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:15.909597   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.909716   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.909867   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:15.909959   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:15.910319   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:15.910339   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.910372   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:15.910385   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.910392   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:15.910419   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:15.910453   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:15.910802   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:15.910895   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:15.911017   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.911061   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:15.911080   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.911097   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:15.911293   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:15.911366   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:15.911684   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:15.911782   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:15.912006   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.912754   10589 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:30:15.912896   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:15.912910   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:15.913078   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I1018 08:30:15.913319   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:15.913340   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.913359   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.913345   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:15.913435   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:15.913442   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:15.913753   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:15.913877   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34625
	I1018 08:30:15.914760   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.913977   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:15.915254   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:15.915359   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.914641   10589 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 08:30:15.914667   10589 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 08:30:15.915863   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	W1018 08:30:15.916196   10589 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 08:30:15.916651   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.916963   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.917085   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34341
	I1018 08:30:15.917619   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.917867   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.917684   10589 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:30:15.918110   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 08:30:15.918196   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:15.918445   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.918488   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.918153   10589 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 08:30:15.918862   10589 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 08:30:15.918892   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:15.918283   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.917754   10589 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:30:15.918341   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:15.919093   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.919217   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.919263   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.919318   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.919542   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:15.919704   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.920020   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:15.920126   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.920175   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:15.920233   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:15.920508   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:15.920692   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:15.921055   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:15.921082   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.921130   10589 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:30:15.921210   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 08:30:15.921451   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.921471   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.921543   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:15.921659   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:15.922064   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:15.922076   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:15.922286   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.922092   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.922511   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.922757   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:15.922763   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.922798   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:15.922936   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:15.923049   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:15.923289   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:15.923549   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:15.925002   10589 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 08:30:15.926236   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.927016   10589 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:30:15.927036   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 08:30:15.927043   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.927046   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.927058   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:15.928829   10589 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 08:30:15.929064   10589 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 08:30:15.930597   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.930610   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:15.930631   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:15.930604   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:15.930642   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46341
	I1018 08:30:15.930647   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.930651   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.930660   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:15.930673   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:15.930691   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.930652   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.930870   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:15.930890   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:15.931190   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:15.931271   10589 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 08:30:15.931289   10589 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 08:30:15.931307   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:15.931365   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:15.931414   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.931522   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:15.931387   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:15.931979   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.931999   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.932130   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:15.932395   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.932658   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.932720   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:15.933021   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:15.933046   10589 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 08:30:15.933203   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:15.934553   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.934599   10589 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 08:30:15.934609   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 08:30:15.934630   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:15.935313   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.935432   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:15.935472   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.935508   10589 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 08:30:15.935522   10589 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 08:30:15.935536   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:15.935988   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:15.936185   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:15.936338   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:15.936485   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:15.938001   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.938861   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:15.938889   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.939141   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:15.939325   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:15.939503   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:15.939646   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:15.940071   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.940514   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:15.940551   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.940583   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.940842   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:15.941034   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:15.941165   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:15.941193   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:15.941263   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.941393   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:15.941516   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:15.941671   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:15.941841   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:15.941986   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:15.946467   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37881
	I1018 08:30:15.947055   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:15.947650   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:15.947667   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:15.948133   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:15.948355   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:15.950664   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:15.953064   10589 out.go:179]   - Using image docker.io/busybox:stable
	I1018 08:30:15.954624   10589 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 08:30:15.956003   10589 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:30:15.956024   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 08:30:15.956046   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:15.959829   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.960344   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:15.960389   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:15.960619   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:15.960813   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:15.961016   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:15.961151   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	W1018 08:30:16.148035   10589 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48898->192.168.39.58:22: read: connection reset by peer
	I1018 08:30:16.148072   10589 retry.go:31] will retry after 177.931674ms: ssh: handshake failed: read tcp 192.168.39.1:48898->192.168.39.58:22: read: connection reset by peer
	W1018 08:30:16.181094   10589 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48926->192.168.39.58:22: read: connection reset by peer
	I1018 08:30:16.181123   10589 retry.go:31] will retry after 339.533366ms: ssh: handshake failed: read tcp 192.168.39.1:48926->192.168.39.58:22: read: connection reset by peer
	W1018 08:30:16.181210   10589 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48934->192.168.39.58:22: read: connection reset by peer
	I1018 08:30:16.181222   10589 retry.go:31] will retry after 275.226596ms: ssh: handshake failed: read tcp 192.168.39.1:48934->192.168.39.58:22: read: connection reset by peer
	I1018 08:30:16.329441   10589 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:30:16.329511   10589 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 08:30:16.520119   10589 node_ready.go:35] waiting up to 6m0s for node "addons-493204" to be "Ready" ...
	I1018 08:30:16.536515   10589 node_ready.go:49] node "addons-493204" is "Ready"
	I1018 08:30:16.536549   10589 node_ready.go:38] duration metric: took 16.388017ms for node "addons-493204" to be "Ready" ...
	I1018 08:30:16.536567   10589 api_server.go:52] waiting for apiserver process to appear ...
	I1018 08:30:16.536621   10589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:30:16.572095   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:30:16.573230   10589 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 08:30:16.573265   10589 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 08:30:16.614107   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 08:30:16.662102   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:30:16.704413   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:30:16.781283   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:30:16.867174   10589 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 08:30:16.867201   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 08:30:16.878016   10589 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:16.878040   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 08:30:16.897489   10589 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 08:30:16.897513   10589 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 08:30:16.905071   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 08:30:16.943581   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:30:16.954187   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:30:17.193014   10589 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 08:30:17.193047   10589 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 08:30:17.202643   10589 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 08:30:17.202670   10589 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 08:30:17.478688   10589 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 08:30:17.478714   10589 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 08:30:17.484381   10589 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 08:30:17.484411   10589 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 08:30:17.615348   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:30:17.624689   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:17.637465   10589 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 08:30:17.637519   10589 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 08:30:17.834468   10589 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 08:30:17.834500   10589 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 08:30:18.077701   10589 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 08:30:18.077738   10589 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 08:30:18.145405   10589 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:30:18.145431   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 08:30:18.238865   10589 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 08:30:18.238897   10589 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 08:30:18.250451   10589 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:30:18.250487   10589 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 08:30:18.297571   10589 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 08:30:18.297636   10589 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 08:30:18.368588   10589 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 08:30:18.368621   10589 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 08:30:18.416766   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:30:18.660007   10589 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:30:18.660035   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 08:30:18.720756   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:30:18.813414   10589 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 08:30:18.813448   10589 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 08:30:18.872997   10589 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 08:30:18.873050   10589 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 08:30:19.096005   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:30:19.269942   10589 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:30:19.269967   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 08:30:19.274048   10589 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 08:30:19.274080   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 08:30:19.700668   10589 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 08:30:19.700699   10589 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 08:30:19.772827   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:30:19.961056   10589 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.631510211s)
	I1018 08:30:19.961099   10589 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1018 08:30:19.961103   10589 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.424457881s)
	I1018 08:30:19.961137   10589 api_server.go:72] duration metric: took 4.202008558s to wait for apiserver process to appear ...
	I1018 08:30:19.961145   10589 api_server.go:88] waiting for apiserver healthz status ...
	I1018 08:30:19.961168   10589 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1018 08:30:19.961173   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.389024878s)
	I1018 08:30:19.961232   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:19.961254   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:19.961238   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.347094759s)
	I1018 08:30:19.961345   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:19.961358   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:19.961596   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:19.961629   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:19.961632   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:19.961644   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:19.961655   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:19.961660   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:19.961663   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:19.961672   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:19.961672   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:19.961726   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:19.961910   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:19.961933   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:19.961945   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:19.961957   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:19.961963   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:19.983560   10589 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1018 08:30:19.984676   10589 api_server.go:141] control plane version: v1.34.1
	I1018 08:30:19.984708   10589 api_server.go:131] duration metric: took 23.553763ms to wait for apiserver health ...
	I1018 08:30:19.984721   10589 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 08:30:19.991248   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:19.991280   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:19.991621   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:19.991665   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:19.991633   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:20.000711   10589 system_pods.go:59] 10 kube-system pods found
	I1018 08:30:20.000755   10589 system_pods.go:61] "amd-gpu-device-plugin-zvmkr" [a57ff2db-5f8e-4afd-8617-5fbef4838726] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:30:20.000767   10589 system_pods.go:61] "coredns-66bc5c9577-5tlzw" [8f863c73-527f-4724-8985-f8ebaa854c64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:30:20.000775   10589 system_pods.go:61] "coredns-66bc5c9577-jbtqc" [7fabdb6f-1da4-471b-8e53-9937a7448559] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:30:20.000784   10589 system_pods.go:61] "etcd-addons-493204" [51c257f6-1aba-4bd5-9e9d-f2fdbcd07b27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 08:30:20.000793   10589 system_pods.go:61] "kube-apiserver-addons-493204" [76fb25b1-0df9-4b10-9bcd-8b76eac7b594] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 08:30:20.000810   10589 system_pods.go:61] "kube-controller-manager-addons-493204" [70cae341-e01b-40c0-9900-929f74125261] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 08:30:20.000820   10589 system_pods.go:61] "kube-proxy-s7lh5" [18ac3f80-31fe-451a-b231-a8bc84703255] Running
	I1018 08:30:20.000826   10589 system_pods.go:61] "kube-scheduler-addons-493204" [2a1310b8-dfde-4fe8-933d-e189437d7080] Running
	I1018 08:30:20.000837   10589 system_pods.go:61] "nvidia-device-plugin-daemonset-5crv7" [a4fe09fc-685f-4b43-959e-871a22fdb4c5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:30:20.000851   10589 system_pods.go:61] "registry-creds-764b6fb674-g7n5w" [a5a61331-6e99-4939-8fee-84433cfa6c2c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:30:20.000862   10589 system_pods.go:74] duration metric: took 16.132811ms to wait for pod list to return data ...
	I1018 08:30:20.000877   10589 default_sa.go:34] waiting for default service account to be created ...
	I1018 08:30:20.007729   10589 default_sa.go:45] found service account: "default"
	I1018 08:30:20.007758   10589 default_sa.go:55] duration metric: took 6.873ms for default service account to be created ...
	I1018 08:30:20.007770   10589 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 08:30:20.014558   10589 system_pods.go:86] 10 kube-system pods found
	I1018 08:30:20.014606   10589 system_pods.go:89] "amd-gpu-device-plugin-zvmkr" [a57ff2db-5f8e-4afd-8617-5fbef4838726] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:30:20.014620   10589 system_pods.go:89] "coredns-66bc5c9577-5tlzw" [8f863c73-527f-4724-8985-f8ebaa854c64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:30:20.014634   10589 system_pods.go:89] "coredns-66bc5c9577-jbtqc" [7fabdb6f-1da4-471b-8e53-9937a7448559] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:30:20.014643   10589 system_pods.go:89] "etcd-addons-493204" [51c257f6-1aba-4bd5-9e9d-f2fdbcd07b27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 08:30:20.014651   10589 system_pods.go:89] "kube-apiserver-addons-493204" [76fb25b1-0df9-4b10-9bcd-8b76eac7b594] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 08:30:20.014662   10589 system_pods.go:89] "kube-controller-manager-addons-493204" [70cae341-e01b-40c0-9900-929f74125261] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 08:30:20.014672   10589 system_pods.go:89] "kube-proxy-s7lh5" [18ac3f80-31fe-451a-b231-a8bc84703255] Running
	I1018 08:30:20.014678   10589 system_pods.go:89] "kube-scheduler-addons-493204" [2a1310b8-dfde-4fe8-933d-e189437d7080] Running
	I1018 08:30:20.014687   10589 system_pods.go:89] "nvidia-device-plugin-daemonset-5crv7" [a4fe09fc-685f-4b43-959e-871a22fdb4c5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:30:20.014700   10589 system_pods.go:89] "registry-creds-764b6fb674-g7n5w" [a5a61331-6e99-4939-8fee-84433cfa6c2c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:30:20.014712   10589 system_pods.go:126] duration metric: took 6.93254ms to wait for k8s-apps to be running ...
	I1018 08:30:20.014729   10589 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 08:30:20.014793   10589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:30:20.323696   10589 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 08:30:20.323720   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 08:30:20.477001   10589 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-493204" context rescaled to 1 replicas
	I1018 08:30:20.648331   10589 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 08:30:20.648359   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 08:30:21.035322   10589 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 08:30:21.035365   10589 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 08:30:21.235104   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 08:30:23.379308   10589 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 08:30:23.379353   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:23.383231   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:23.383726   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:23.383767   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:23.383938   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:23.384247   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:23.384427   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:23.384621   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:23.627380   10589 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 08:30:23.707645   10589 addons.go:238] Setting addon gcp-auth=true in "addons-493204"
	I1018 08:30:23.707711   10589 host.go:66] Checking if "addons-493204" exists ...
	I1018 08:30:23.708076   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:23.708132   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:23.722255   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41389
	I1018 08:30:23.722751   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:23.723248   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:23.723280   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:23.723636   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:23.724407   10589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:30:23.724454   10589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:30:23.739004   10589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I1018 08:30:23.739688   10589 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:30:23.740249   10589 main.go:141] libmachine: Using API Version  1
	I1018 08:30:23.740275   10589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:30:23.740670   10589 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:30:23.740892   10589 main.go:141] libmachine: (addons-493204) Calling .GetState
	I1018 08:30:23.742878   10589 main.go:141] libmachine: (addons-493204) Calling .DriverName
	I1018 08:30:23.743196   10589 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 08:30:23.743229   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHHostname
	I1018 08:30:23.747221   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:23.747906   10589 main.go:141] libmachine: (addons-493204) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:27:75", ip: ""} in network mk-addons-493204: {Iface:virbr1 ExpiryTime:2025-10-18 09:29:49 +0000 UTC Type:0 Mac:52:54:00:48:27:75 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-493204 Clientid:01:52:54:00:48:27:75}
	I1018 08:30:23.747962   10589 main.go:141] libmachine: (addons-493204) DBG | domain addons-493204 has defined IP address 192.168.39.58 and MAC address 52:54:00:48:27:75 in network mk-addons-493204
	I1018 08:30:23.748167   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHPort
	I1018 08:30:23.748493   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHKeyPath
	I1018 08:30:23.748685   10589 main.go:141] libmachine: (addons-493204) Calling .GetSSHUsername
	I1018 08:30:23.748841   10589 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/addons-493204/id_rsa Username:docker}
	I1018 08:30:25.673006   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.010862315s)
	I1018 08:30:25.673065   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.673077   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.673116   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.968665101s)
	I1018 08:30:25.673171   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.673197   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.673210   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.891896669s)
	I1018 08:30:25.673246   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.673263   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.673284   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.768187795s)
	I1018 08:30:25.673305   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.673316   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.673353   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.673370   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.673385   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.673391   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.729787516s)
	I1018 08:30:25.673394   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.673403   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.673407   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.673415   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.673449   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.719228556s)
	I1018 08:30:25.673473   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.673483   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.673513   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.058131596s)
	I1018 08:30:25.673530   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.673539   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.673609   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.048888271s)
	W1018 08:30:25.673636   10589 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:25.673658   10589 retry.go:31] will retry after 133.229836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
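
Note on the two failures above: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest it is handed has no apiVersion or kind, and minikube's retry.go then schedules another attempt after a short delay ("will retry after 133.229836ms"). As a rough illustration only, and not minikube's actual addons code, the retry loop visible in this log amounts to something like the following hypothetical Go sketch (manifest paths and attempt counts are taken from the log; the helper name is invented):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // applyAddonWithRetry re-runs "kubectl apply --force" on the given manifests,
    // sleeping between attempts, mirroring the retry.go lines in the log above.
    func applyAddonWithRetry(kubeconfig string, manifests []string, attempts int, delay time.Duration) error {
        args := []string{"apply", "--force"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        var lastErr error
        for i := 1; i <= attempts; i++ {
            cmd := exec.Command("kubectl", args...)
            cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
            out, err := cmd.CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("attempt %d failed: %v\n%s", i, err, out)
            time.Sleep(delay)
        }
        return lastErr
    }

    func main() {
        if err := applyAddonWithRetry("/var/lib/minikube/kubeconfig",
            []string{"/etc/kubernetes/addons/ig-crd.yaml", "/etc/kubernetes/addons/ig-deployment.yaml"},
            3, 500*time.Millisecond); err != nil {
            fmt.Println(err)
        }
    }

Retrying cannot succeed here, since the manifest content itself fails validation on every attempt; the loop only papers over transient errors.
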
	I1018 08:30:25.673718   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.25692414s)
	I1018 08:30:25.673738   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.673748   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.673861   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.953075698s)
	I1018 08:30:25.673880   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.673889   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.673997   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.577951918s)
	I1018 08:30:25.674018   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.674032   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.674079   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.674101   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.674103   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.674104   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.674111   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.674115   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.674118   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.674126   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.674128   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.674135   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.674128   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.674138   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.674145   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.674150   10589 addons.go:479] Verifying addon ingress=true in "addons-493204"
	I1018 08:30:25.674136   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.674170   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.674196   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.674204   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.674211   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.675075   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.675108   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.675120   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.675197   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.674219   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.675254   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.675272   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.675282   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.675291   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.674153   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.674236   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.675351   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.675359   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.675365   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.675510   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.675527   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.675737   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.675769   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.675778   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.675787   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.675795   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.676721   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.676752   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.676758   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.676765   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.676770   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.676824   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.676844   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.676851   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.676857   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.676863   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.677450   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.677502   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.677510   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.677519   10589 addons.go:479] Verifying addon metrics-server=true in "addons-493204"
	I1018 08:30:25.675340   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.677627   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.677827   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.677861   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.677868   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.678144   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.678229   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.678271   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.678256   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.678284   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.678271   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.678295   10589 addons.go:479] Verifying addon registry=true in "addons-493204"
	I1018 08:30:25.678647   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.678866   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.678875   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.679021   10589 out.go:179] * Verifying ingress addon...
	I1018 08:30:25.681333   10589 out.go:179] * Verifying registry addon...
	I1018 08:30:25.681346   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.681377   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:25.681409   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.681420   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.681495   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.681514   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:25.682239   10589 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 08:30:25.683083   10589 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-493204 service yakd-dashboard -n yakd-dashboard
	
	I1018 08:30:25.683674   10589 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 08:30:25.762623   10589 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 08:30:25.762646   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:25.767322   10589 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 08:30:25.767343   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:25.807785   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:25.817371   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:25.817392   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:25.817655   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:25.817676   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:26.214733   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:26.214764   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:26.398409   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.625533299s)
	I1018 08:30:26.398437   10589 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (6.383621377s)
	I1018 08:30:26.398458   10589 system_svc.go:56] duration metric: took 6.383727028s WaitForService to wait for kubelet
	W1018 08:30:26.398456   10589 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 08:30:26.398477   10589 retry.go:31] will retry after 297.469099ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
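
The failure above is the usual CRD ordering race: the VolumeSnapshotClass object and the snapshot.storage.k8s.io CRDs are applied in the same invocation, so the class cannot be mapped until the CRDs are established, hence kubectl's "ensure CRDs are installed first" and the subsequent retry (which does succeed once the CRDs are registered). One common way to avoid the race, shown here only as a hypothetical sketch and not as what minikube does, is to apply the CRDs first, wait for the Established condition, and only then apply resources that depend on them:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // run shells out to kubectl and surfaces combined output on failure.
    func run(args ...string) error {
        cmd := exec.Command("kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
        }
        return nil
    }

    func main() {
        // 1. Install the CRD on its own.
        // 2. Block until the API server reports it Established.
        // 3. Only then apply the VolumeSnapshotClass that uses it.
        steps := [][]string{
            {"apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
            {"wait", "--for", "condition=established", "--timeout=60s",
                "crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
            {"apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
        }
        for _, s := range steps {
            if err := run(s...); err != nil {
                fmt.Println(err)
                return
            }
        }
    }
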
	I1018 08:30:26.398468   10589 kubeadm.go:586] duration metric: took 10.639338461s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:30:26.398496   10589 node_conditions.go:102] verifying NodePressure condition ...
	I1018 08:30:26.426626   10589 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 08:30:26.426665   10589 node_conditions.go:123] node cpu capacity is 2
	I1018 08:30:26.426681   10589 node_conditions.go:105] duration metric: took 28.174187ms to run NodePressure ...
	I1018 08:30:26.426694   10589 start.go:241] waiting for startup goroutines ...
	I1018 08:30:26.697085   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:30:26.713677   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:26.714768   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:27.214743   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:27.215850   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:27.272186   10589 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.528962462s)
	I1018 08:30:27.272879   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.037724111s)
	I1018 08:30:27.272946   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:27.272962   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:27.273303   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:27.273317   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:27.273321   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:27.273339   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:27.273347   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:27.273615   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:27.273631   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:27.273639   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:27.273640   10589 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-493204"
	I1018 08:30:27.274345   10589 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:30:27.275360   10589 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 08:30:27.277153   10589 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 08:30:27.277687   10589 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 08:30:27.278604   10589 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 08:30:27.278630   10589 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 08:30:27.284839   10589 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 08:30:27.284871   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:27.429143   10589 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 08:30:27.429179   10589 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 08:30:27.552638   10589 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:30:27.552664   10589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 08:30:27.660467   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:30:27.689485   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:27.690243   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:27.783832   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:28.202721   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:28.219730   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:28.289886   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:28.689845   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:28.690023   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:28.783298   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:29.187980   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:29.189378   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:29.292700   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:29.569272   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.761444926s)
	W1018 08:30:29.569343   10589 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:29.569370   10589 retry.go:31] will retry after 496.656788ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:29.569400   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.872273367s)
	I1018 08:30:29.569452   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:29.569472   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:29.569829   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:29.569847   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:29.569858   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:29.569893   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:29.570159   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:29.570177   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:29.686133   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.025607575s)
	I1018 08:30:29.686202   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:29.686221   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:29.686532   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:29.686594   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:29.686615   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:29.686630   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:30:29.686639   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:30:29.686848   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:30:29.686867   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:30:29.686894   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:30:29.688018   10589 addons.go:479] Verifying addon gcp-auth=true in "addons-493204"
	I1018 08:30:29.689890   10589 out.go:179] * Verifying gcp-auth addon...
	I1018 08:30:29.692016   10589 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 08:30:29.700795   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:29.713665   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:29.792562   10589 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 08:30:29.792585   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:29.793485   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:30.066880   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:30.210709   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:30.213150   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:30.213509   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:30.285396   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:30.689995   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:30.693147   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:30.700347   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:30.790771   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:31.199359   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:31.200805   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:31.205378   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:31.283524   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:31.626088   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.559168301s)
	W1018 08:30:31.626164   10589 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:31.626185   10589 retry.go:31] will retry after 592.097368ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:31.690934   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:31.691370   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:31.699179   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:31.795419   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:32.191642   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:32.193483   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:32.199307   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:32.218457   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:32.285352   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:32.690159   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:32.690535   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:32.699355   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:32.784789   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:33.189629   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:33.189904   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:33.197661   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:33.294193   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:33.674233   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.45574267s)
	W1018 08:30:33.674273   10589 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:33.674293   10589 retry.go:31] will retry after 434.251739ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:33.688449   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:33.695904   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:33.696618   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:33.782791   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:34.108838   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:34.189159   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:34.191137   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:34.198664   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:34.283511   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:34.689897   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:34.690200   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:34.699300   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:34.783473   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:35.190177   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:35.195029   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:35.197279   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:35.239959   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.131076794s)
	W1018 08:30:35.240005   10589 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:35.240028   10589 retry.go:31] will retry after 1.565705509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:35.286142   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:35.688142   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:35.691385   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:35.697069   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:35.783376   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:36.188758   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:36.189649   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:36.198622   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:36.285207   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:36.687505   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:36.688967   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:36.698083   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:36.786492   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:36.806499   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:37.190033   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:37.190772   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:37.197492   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:37.283475   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:37.688854   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:37.690220   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:37.696059   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:37.787826   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:37.932020   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.125481238s)
	W1018 08:30:37.932073   10589 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:37.932092   10589 retry.go:31] will retry after 1.420502735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
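
Every one of the retries above fails with the same client-side validation error: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because at least one YAML document in that file carries no top-level apiVersion or kind field. The file's contents are not reproduced in this log, so the path is the only detail taken from it; the following is a minimal, illustrative sketch (not minikube code) of the check kubectl is performing, written against gopkg.in/yaml.v3:

	// validate_manifest.go - illustrative sketch of the check behind
	// "error validating data: [apiVersion not set, kind not set]".
	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 1; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // no more YAML documents in the file
				}
				fmt.Fprintf(os.Stderr, "document %d: %v\n", i, err)
				os.Exit(1)
			}
			// This is the condition the kubectl error above reports.
			if doc["apiVersion"] == nil || doc["kind"] == nil {
				fmt.Printf("document %d is missing apiVersion and/or kind\n", i)
			}
		}
	}

The --validate=false escape hatch mentioned in the error only skips this client-side check; apiVersion and kind are still needed to resolve which API endpoint an object belongs to, so the retries cannot succeed until the manifest itself declares them.
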
	I1018 08:30:38.190227   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:38.191476   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:38.199016   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:38.281420   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:38.689262   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:38.689735   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:38.699683   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:38.783117   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:39.353185   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:39.421743   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:39.421824   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:39.422319   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:39.422424   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:39.690257   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:39.690453   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:39.696254   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:39.786517   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:40.189516   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:40.189741   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:40.197948   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:40.284951   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:40.416103   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.062867947s)
	W1018 08:30:40.416152   10589 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:40.416177   10589 retry.go:31] will retry after 2.862721362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:40.689680   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:40.690940   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:40.702479   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:40.783562   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:41.185897   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:41.187976   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:41.196341   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:41.283302   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:42.044070   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:42.044101   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:42.044229   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:42.045468   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:42.197781   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:42.198330   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:42.199619   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:42.283819   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:42.687368   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:42.688816   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:42.696076   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:42.782254   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:43.186553   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:43.187422   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:43.195900   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:43.279242   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:43.283377   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:43.686866   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:43.688871   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:43.696618   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:43.784611   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:44.191872   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:44.195285   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:44.200550   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:30:44.253903   10589 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:44.253967   10589 retry.go:31] will retry after 4.766933532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:44.281988   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:44.925723   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:44.926011   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:44.926322   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:44.926353   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:45.198797   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:45.198820   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:45.199282   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:45.285750   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:45.687211   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:45.688154   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:45.694943   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:45.781409   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:46.188122   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:46.188227   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:46.195409   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:46.282153   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:46.687531   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:46.687835   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:46.696485   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:46.782414   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:47.186561   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:47.186580   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:47.195747   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:47.282629   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:47.688017   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:47.688061   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:47.696833   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:47.782905   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:48.188288   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:48.188795   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:48.196501   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:48.283063   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:48.688198   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:48.689402   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:48.696305   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:48.782328   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:49.021556   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:49.187234   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:49.189719   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:49.204481   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:49.282460   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:49.689032   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:49.690058   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:49.697287   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:49.782687   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:49.788843   10589 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:49.788886   10589 retry.go:31] will retry after 6.726287255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:50.189727   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:50.189950   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:50.196902   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:50.281466   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:50.687993   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:50.689553   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:50.700082   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:50.784621   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:51.188083   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:51.189127   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:51.195630   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:51.281773   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:51.689193   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:51.690740   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:51.697479   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:51.784856   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:52.189297   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:52.189632   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:52.196152   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:52.284999   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:52.689787   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:52.690111   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:52.698508   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:52.781936   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:53.187763   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:53.191691   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:53.204122   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:53.281666   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:53.688859   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:53.689245   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:53.696858   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:53.782263   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:54.202709   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:54.202723   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:54.208104   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:54.294599   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:54.688950   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:54.689386   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:54.696784   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:54.790211   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:55.188117   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:55.188395   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:55.199312   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:55.281770   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:55.687993   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:55.688477   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:55.695670   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:55.781521   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:56.187116   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:56.187246   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:56.195449   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:56.286880   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:56.516183   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:56.690728   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:56.692578   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:56.697651   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:56.784163   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:57.190080   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:57.190871   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:57.196663   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:57.284657   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:57.694039   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:57.694210   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:57.698955   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:57.785759   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:57.812099   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.295869411s)
	W1018 08:30:57.812140   10589 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:57.812161   10589 retry.go:31] will retry after 13.98496483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:58.190893   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:58.191727   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:58.197086   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:58.284263   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:58.687801   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:58.690215   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:58.696327   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:58.977284   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:59.196258   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:59.196485   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:59.199134   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:59.284569   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:59.685954   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:59.688297   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:59.697756   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:59.781301   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:00.188246   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:00.188245   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:00.198533   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:00.518264   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:00.689137   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:00.695504   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:00.699063   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:00.784542   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:01.191912   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:01.193486   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:01.200100   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:01.284615   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:01.713385   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:01.713451   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:01.722390   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:01.790683   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:02.188865   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:02.189486   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:02.195954   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:02.283499   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:02.694251   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:02.694976   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:02.699425   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:02.793245   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:03.189831   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:03.190839   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:03.196937   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:03.283509   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:03.689338   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:03.690024   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:03.701018   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:03.783856   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:04.188350   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:04.190482   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:04.196329   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:04.283390   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:04.998980   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:04.999249   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:04.999900   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:05.000941   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:05.192013   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:05.193155   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:05.197552   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:05.291550   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:05.686430   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:05.686761   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:05.696712   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:05.783833   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:06.189344   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:06.192352   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:06.196022   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:06.286149   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:06.689805   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:06.690629   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:06.696240   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:06.786542   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:07.190170   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:07.190406   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:07.198660   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:07.291263   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:07.690685   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:07.690899   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:07.696285   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:07.782194   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:08.187140   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:08.189030   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:08.198406   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:08.282645   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:08.687694   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:08.688264   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:08.697751   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:08.787680   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:09.188734   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:09.189475   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:09.198405   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:09.288120   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:09.699599   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:09.700288   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:09.701342   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:09.782385   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:10.439840   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:10.440899   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:10.440955   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:10.441170   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:10.689659   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:10.689765   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:10.696637   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:10.791620   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:11.186950   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:11.188842   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:11.196855   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:11.281478   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:11.694478   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:11.695162   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:11.696245   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:11.782436   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:11.797349   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:12.190552   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:12.193193   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:12.196981   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:12.281740   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:31:12.518081   10589 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:12.518121   10589 retry.go:31] will retry after 13.998147862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
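
The failed apply is retried with a delay that grows on each attempt (1.42s, 2.86s, 4.77s, 6.73s, then roughly 14s twice above). Below is a generic, self-contained sketch of that backoff-with-jitter pattern; it is not minikube's actual retry.go, and the function name, signature, and parameters are assumptions for illustration only:

	// retry_sketch.go - generic backoff-with-jitter sketch (assumed, not minikube code).
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn up to maxAttempts times, sleeping an exponentially
	// growing, jittered delay between failed attempts.
	func retry(maxAttempts int, base time.Duration, fn func() error) error {
		var err error
		for attempt := 0; attempt < maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			// base * 2^attempt, inflated by up to 50% of jitter.
			delay := base << uint(attempt)
			delay += time.Duration(rand.Int63n(int64(delay) / 2))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		_ = retry(5, time.Second, func() error {
			calls++
			if calls < 4 {
				return fmt.Errorf("apply failed (attempt %d)", calls)
			}
			return nil
		})
	}
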
	I1018 08:31:12.690277   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:12.696489   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:12.698005   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:12.781404   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:13.201385   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:13.201566   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:13.205628   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:13.282846   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:13.690155   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:13.690359   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:13.695940   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:13.783459   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:14.192086   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:14.192131   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:14.196814   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:14.281546   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:14.688000   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:14.688095   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:14.698630   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:14.788373   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:15.185864   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:15.187190   10589 kapi.go:107] duration metric: took 49.503515597s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 08:31:15.195053   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:15.282181   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:15.686351   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:15.697458   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:15.784379   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:16.188638   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:16.197540   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:16.282961   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:16.688563   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:16.697207   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:16.784628   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:17.188025   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:17.196542   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:17.286791   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:17.687182   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:17.696468   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:17.782355   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:18.191107   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:18.201272   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:18.283862   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:18.687078   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:18.697971   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:18.782381   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:19.187836   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:19.195694   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:19.281874   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:19.687874   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:19.695570   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:19.783263   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:20.193124   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:20.199577   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:20.283108   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:20.686525   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:20.695531   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:20.782365   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:21.186860   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:21.196619   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:21.287609   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:21.686531   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:21.695734   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:21.781263   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:22.187379   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:22.195481   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:22.282154   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:22.686824   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:22.696874   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:22.783315   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:23.189770   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:23.198669   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:23.281572   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:23.687101   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:23.696397   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:23.781849   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:24.189213   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:24.195268   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:24.287577   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:24.687540   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:24.695849   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:24.782100   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:25.186041   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:25.196519   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:25.287698   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:25.687781   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:25.695988   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:25.784554   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:26.187818   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:26.196591   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:26.285166   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:26.517377   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:26.688252   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:26.697271   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:26.785413   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:27.186090   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:27.197438   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:27.283128   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:27.664688   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.147268407s)
	W1018 08:31:27.664734   10589 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:27.664757   10589 retry.go:31] will retry after 11.106327408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:27.689830   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:27.697284   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:27.784752   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:28.187458   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:28.195201   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:28.284969   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:28.689671   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:28.695613   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:28.782547   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:29.186818   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:29.195398   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:29.283518   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:29.687236   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:29.696684   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:29.787630   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:30.201202   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:30.201368   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:30.303293   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:30.693288   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:30.698232   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:30.782389   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:31.188309   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:31.196297   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:31.288903   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:31.688244   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:31.699450   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:31.782051   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:32.190229   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:32.196021   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:32.282392   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:32.686710   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:32.695598   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:32.784429   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:33.189363   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:33.196341   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:33.283742   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:33.687308   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:33.696241   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:33.788425   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:34.186549   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:34.196302   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:34.287647   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:34.690506   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:34.696991   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:34.785250   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:35.186981   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:35.198693   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:35.293311   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:35.687865   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:35.697550   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:35.782747   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:36.188007   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:36.289080   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:36.289158   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:36.686339   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:36.695593   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:36.782534   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:37.186196   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:37.196137   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:37.282406   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:37.687730   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:37.697628   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:37.785271   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:38.379214   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:38.383675   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:38.386546   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:38.689755   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:38.698616   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:38.771819   10589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:38.788388   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:39.187190   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:39.197170   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:39.282745   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:39.688677   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:39.697291   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:39.783884   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:40.201360   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:40.206200   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:40.292200   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:40.313792   10589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.541937923s)
	W1018 08:31:40.313829   10589 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:40.313880   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:31:40.313895   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:31:40.314271   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:31:40.314271   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:31:40.314302   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:31:40.314311   10589 main.go:141] libmachine: Making call to close driver server
	I1018 08:31:40.314319   10589 main.go:141] libmachine: (addons-493204) Calling .Close
	I1018 08:31:40.314593   10589 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:31:40.314605   10589 main.go:141] libmachine: (addons-493204) DBG | Closing plugin on server side
	I1018 08:31:40.314615   10589 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 08:31:40.314704   10589 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
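
Note on the failure above: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest's top-level apiVersion and kind fields are not set, which is exactly what the "error validating data: [apiVersion not set, kind not set]" message reports. For illustration only (this is a hedged sketch with placeholder group and resource names, not the actual inspektor-gadget CRD), a CustomResourceDefinition header that would pass that validation looks like:

# Hypothetical sketch: the top-level fields kubectl's validator expects in a CRD manifest.
# Group, plural, and kind below are placeholders, not the real inspektor-gadget resource.
apiVersion: apiextensions.k8s.io/v1      # top-level apiVersion must be present
kind: CustomResourceDefinition           # top-level kind must be present
metadata:
  name: traces.example.gadget.io         # must be <plural>.<group>
spec:
  group: example.gadget.io
  scope: Namespaced
  names:
    plural: traces
    singular: trace
    kind: Trace
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object

The retry at 08:31:38 fails the same way because the file on disk is unchanged; only the gadget DaemonSet and RBAC objects from ig-deployment.yaml apply cleanly.
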
	I1018 08:31:40.686689   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:40.696456   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:40.789509   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:41.189174   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:41.197828   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:41.283244   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:41.688463   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:41.697269   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:41.783962   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:42.187510   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:42.199761   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:42.286915   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:42.851309   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:42.851556   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:42.852199   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:43.187113   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:43.196520   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:43.406910   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:43.686970   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:43.696806   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:43.783785   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:44.186952   10589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:44.196234   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:44.284531   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:44.689534   10589 kapi.go:107] duration metric: took 1m19.007296825s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 08:31:44.696719   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:44.784385   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:45.197079   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:45.284492   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:45.696472   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:45.782792   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:46.197053   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:46.282566   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:46.696199   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:46.782412   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:47.196645   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:47.283521   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:47.696124   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:47.782893   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:48.196391   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:48.283645   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:48.697665   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:48.783536   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:49.196438   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:49.283703   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:49.696569   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:49.801515   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:50.197722   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:50.282848   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:50.698420   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:50.784473   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:51.196291   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:51.281807   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:51.701558   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:51.785556   10589 kapi.go:107] duration metric: took 1m24.507866402s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 08:31:52.196655   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:52.695649   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:53.196009   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:53.697049   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:54.195633   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:54.695299   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:55.196242   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:55.697454   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:56.196083   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:56.698982   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:57.195577   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:57.697343   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:58.195996   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:58.695661   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:59.196421   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:59.697656   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:00.199062   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:00.696561   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:01.196618   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:01.696414   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:02.196142   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:02.698351   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:03.196253   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:03.696881   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:04.195885   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:04.695581   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:05.196695   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:05.696290   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:06.196571   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:06.696486   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:07.197269   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:07.696909   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:08.195282   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:08.696601   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:09.196351   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:09.697487   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:10.196784   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:10.695826   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:11.196856   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:11.696593   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:12.197362   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:12.697116   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:13.195700   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:13.695050   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:14.196631   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:14.695574   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:15.200245   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:15.696454   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:16.195656   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:16.695723   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:17.195528   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:17.697348   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:18.196298   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:18.696470   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:19.196530   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:19.696015   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:20.197114   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:20.696536   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:21.196560   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:21.695970   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:22.196949   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:22.697284   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:23.196217   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:23.695954   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:24.196819   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:24.695324   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:25.198169   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:25.696392   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:26.197388   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:26.696227   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:27.196333   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:27.697890   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:28.196986   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:28.696290   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:29.196548   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:29.697481   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:30.200317   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:30.696667   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:31.195812   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:31.696422   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:32.197079   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:32.696069   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:33.196120   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:33.696337   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:34.197060   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:34.696522   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:35.196179   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:35.695809   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:36.197192   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:36.696651   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:37.196907   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:37.696407   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:38.198279   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:38.697300   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:39.196138   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:39.696551   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:40.197882   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:40.695276   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:41.196891   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:41.696416   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:42.197456   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:42.697284   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:43.195759   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:43.696054   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:44.197779   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:44.697677   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:45.196180   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:45.695868   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:46.196777   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:46.697581   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:47.196999   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:47.697420   10589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:48.199810   10589 kapi.go:107] duration metric: took 2m18.507791588s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 08:32:48.201843   10589 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-493204 cluster.
	I1018 08:32:48.203422   10589 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 08:32:48.205042   10589 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 08:32:48.206688   10589 out.go:179] * Enabled addons: registry-creds, default-storageclass, amd-gpu-device-plugin, metrics-server, nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1018 08:32:48.208123   10589 addons.go:514] duration metric: took 2m32.449013124s for enable addons: enabled=[registry-creds default-storageclass amd-gpu-device-plugin metrics-server nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
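
For reference, the gcp-auth opt-out mentioned in the messages above is expressed as a pod label with the `gcp-auth-skip-secret` key. A minimal sketch, assuming a placeholder pod name and image (the label value shown is illustrative):

# Hypothetical pod that asks the gcp-auth webhook not to mount credentials.
apiVersion: v1
kind: Pod
metadata:
  name: no-creds-example            # placeholder name
  labels:
    gcp-auth-skip-secret: "true"    # key taken from the message above; value illustrative
spec:
  containers:
    - name: app
      image: nginx                  # placeholder image
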
	I1018 08:32:48.208178   10589 start.go:246] waiting for cluster config update ...
	I1018 08:32:48.208201   10589 start.go:255] writing updated cluster config ...
	I1018 08:32:48.208500   10589 ssh_runner.go:195] Run: rm -f paused
	I1018 08:32:48.218441   10589 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 08:32:48.226758   10589 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jbtqc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:48.234627   10589 pod_ready.go:94] pod "coredns-66bc5c9577-jbtqc" is "Ready"
	I1018 08:32:48.234658   10589 pod_ready.go:86] duration metric: took 7.873141ms for pod "coredns-66bc5c9577-jbtqc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:48.237026   10589 pod_ready.go:83] waiting for pod "etcd-addons-493204" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:48.244550   10589 pod_ready.go:94] pod "etcd-addons-493204" is "Ready"
	I1018 08:32:48.244574   10589 pod_ready.go:86] duration metric: took 7.527941ms for pod "etcd-addons-493204" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:48.247305   10589 pod_ready.go:83] waiting for pod "kube-apiserver-addons-493204" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:48.255434   10589 pod_ready.go:94] pod "kube-apiserver-addons-493204" is "Ready"
	I1018 08:32:48.255460   10589 pod_ready.go:86] duration metric: took 8.135834ms for pod "kube-apiserver-addons-493204" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:48.258891   10589 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-493204" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:48.622870   10589 pod_ready.go:94] pod "kube-controller-manager-addons-493204" is "Ready"
	I1018 08:32:48.622914   10589 pod_ready.go:86] duration metric: took 363.971788ms for pod "kube-controller-manager-addons-493204" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:48.822663   10589 pod_ready.go:83] waiting for pod "kube-proxy-s7lh5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:49.223407   10589 pod_ready.go:94] pod "kube-proxy-s7lh5" is "Ready"
	I1018 08:32:49.223434   10589 pod_ready.go:86] duration metric: took 400.744406ms for pod "kube-proxy-s7lh5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:49.423474   10589 pod_ready.go:83] waiting for pod "kube-scheduler-addons-493204" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:49.824324   10589 pod_ready.go:94] pod "kube-scheduler-addons-493204" is "Ready"
	I1018 08:32:49.824353   10589 pod_ready.go:86] duration metric: took 400.852296ms for pod "kube-scheduler-addons-493204" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:49.824364   10589 pod_ready.go:40] duration metric: took 1.605888192s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 08:32:49.871588   10589 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 08:32:49.873741   10589 out.go:179] * Done! kubectl is now configured to use "addons-493204" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.145520163Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bebb9066-eea0-4062-b4a3-594ee1b7f744 name=/runtime.v1.RuntimeService/Version
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.146354533Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bebb9066-eea0-4062-b4a3-594ee1b7f744 name=/runtime.v1.RuntimeService/Version
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.149662768Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=702e2329-9bcf-4019-8c46-50f66b6f037b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.150936556Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760776545150906796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598024,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=702e2329-9bcf-4019-8c46-50f66b6f037b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.151674056Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1089c66e-6348-473b-8fb4-461f72df384f name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.151746925Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1089c66e-6348-473b-8fb4-461f72df384f name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.152175625Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c85f34b014c895af3588a46a4668768f6d23452c23497db46b825e18938391ca,PodSandboxId:68d2ab41d1537adb3359520c4ffdbd579d82c09bf8bd89f71cfddff26869741e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760776401952864069,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4867d583-45c3-4d54-ab34-d50cc052e2ca,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f6038f5e65934728cebef82f4d96fe61d4ec1519256256ae3c8ba16e44e304b,PodSandboxId:4c325e5a526f8d3fb61561211a708b53f33e04ebf3abbcff6d9e918a07d52e9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760776374531895137,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e555c26b-13ad-4fca-a7c2-7ac393455c96,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acb4b17770ea1a8b7de5f94c70f62d8a325f376f4dc76a5ff73d8ee4e72cf6e2,PodSandboxId:77ce7df9efdbf83311484eb2ba28c5ca33a5a418c7e0cc9120d2b5f7ce025242,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760776304284768319,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-gnz9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b74f42e1-1ce4-4f76-9ec3-02dd8cac670b,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5e4329a9fc3e73d04c916c6c138c38afc9a91517c3ff0aab002c52dacab32267,PodSandboxId:19ad1fa9e6755a792162523c6f97b84183ba0e76fd07acdc696b3157e4bbb4db,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760776281873655914,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-l59lx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a2dcfcd5-6588-447d-ad3e-eaaab038c8e7,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a15d23cf01fd5088e31334996e9ad805d3855b8057c93421fbde60c2a7c757,PodSandboxId:7d0efbb7003f622d83d8cc1ec8ff48e98d5eca9d7fa9ed0294654f8058c8c43c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760776281020157556,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8xqfn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c25a41d8-67dd-486f-ab9f-dc0bfe79728b,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dbadc23b917b9fe4988d111845c35f20a3e465eb20bf7b4c69f248af5becbf2,PodSandboxId:415a5538605405f947a8211fe35b0f22f84a5944c62688020574c0038972a0c1,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760776279318981912,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-59848,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 4a4830ab-b635-43b5-9719-3dfef197e8df,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22b1319adb326ecaa7e01f7d0ccf465f0d3c3d0f22fdcda9169f3d6807683c74,PodSandboxId:9bd5d7a4e0ffd7fa4ed76334f75c68c730020a3ef9de6feb896e58b0f2f33623,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760776265141744711,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be332f77-5dc7-40aa-8a6d-236a64de7b4f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532d63906634e2eee8ff15c305e3813941d45f30fa86d90b5c415da87fd28e12,PodSandboxId:9bd11505674ae41b9e034a65f5761df7b1c278821473959b03d6343c59396ab6,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760776248021697728,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zvmkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57ff2db-5f8e-4afd-8617-5fbef4838726,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f00294168da89e2f376704ddf64a088d762c7b598afa1a8caed1a8cec78d22,PodSandboxId:065033d44b3db3c8d2c9d0dda567344687b1de944df180664f0d2e68b3a4d5ea,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760776229195689265,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f479881c-2d44-410a-b7d0-ce788bb1b3eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22cf9fbb72c975ca2ee788b2352f1ee5d8f81ab9b2b174c7ca1aedeab5887b56,PodSandboxId:671e9b8d39360bd4869f2f1d316aa12a5552883826f8714feecf63733201daa5,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760776217801726189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jbtqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fabdb6f-1da4-471b-8e53-9937a7448559,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54df044239d2cf41a10fec869dbc3f77ef91a6e02272cab06ee7890082f0829c,PodSandboxId:48821fe84a915782a226c2b9fd87acde4b1b1102d6d1e8ac1c8af58e6a94113a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760776217058986006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7lh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ac3f80-31fe-451a-b231-a8bc84703255,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c631c83cfd491e7b18be0fe8bd927fe9242066afe840fb38439c7b845561911,PodSandboxId:be778c110a6f704138847eea75d52c11d3af6d6d142204e931fc3835aa7ae60f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760776205324583744,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-493204,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 402cf9716aa5b2b989c8522879696c1e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPor
t\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3c88a90a820c00db746e417c06190caa2685ec308ca3c76ce497205fb12278,PodSandboxId:a84fa07f046e34572baa55d5f6fdf96c0829aa78594134f0c2ba4b3fe7bb8f6a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760776205312583462,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-493204,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aa0005f8056b5d27c325e65e02dcc74,},Annotations:map[string]string{io.kubernetes.container.
hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0d09f4da1cce2204ba53ba293e45ddcddfb54a68e967be7edff1b8455f8609,PodSandboxId:ead723d36a5b1d9c16b0faf83e6af5343436414b48b5a5c6803a2df76a14590a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760776205267770407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-493204,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 6afc8c47a925a4319943cf14e40744a8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a2bce6d242b782851eb6d8f2dcfe576ba38219927a5ed4edc815a0e9ca2426,PodSandboxId:6170e9b358675e1936e1653768e842072fd28f195d04668a328321c4f03a8ef3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760776205284345766,Labels:map[string]string{io.kubernetes.containe
r.name: etcd,io.kubernetes.pod.name: etcd-addons-493204,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cb7fd13df7aebf1e773407b47aa0ba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1089c66e-6348-473b-8fb4-461f72df384f name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.208079655Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7336dffe-b232-42ff-98de-8cebe254aa3a name=/runtime.v1.RuntimeService/Version
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.208171034Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7336dffe-b232-42ff-98de-8cebe254aa3a name=/runtime.v1.RuntimeService/Version
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.216871669Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab9598be-62fa-468b-958f-26038c436796 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.219799303Z" level=debug msg="Ping https://registry-1.docker.io/v2/ status 401" file="docker/docker_client.go:901"
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.219972862Z" level=debug msg="GET https://auth.docker.io/token?scope=repository%3Akicbase%2Fecho-server%3Apull&service=registry.docker.io" file="docker/docker_client.go:861"
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.221746395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760776545221708926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598024,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab9598be-62fa-468b-958f-26038c436796 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.222852467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0926ec17-2e34-4bcb-a074-23f29b1cf106 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.222955088Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0926ec17-2e34-4bcb-a074-23f29b1cf106 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.223554411Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c85f34b014c895af3588a46a4668768f6d23452c23497db46b825e18938391ca,PodSandboxId:68d2ab41d1537adb3359520c4ffdbd579d82c09bf8bd89f71cfddff26869741e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760776401952864069,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4867d583-45c3-4d54-ab34-d50cc052e2ca,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f6038f5e65934728cebef82f4d96fe61d4ec1519256256ae3c8ba16e44e304b,PodSandboxId:4c325e5a526f8d3fb61561211a708b53f33e04ebf3abbcff6d9e918a07d52e9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760776374531895137,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e555c26b-13ad-4fca-a7c2-7ac393455c96,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acb4b17770ea1a8b7de5f94c70f62d8a325f376f4dc76a5ff73d8ee4e72cf6e2,PodSandboxId:77ce7df9efdbf83311484eb2ba28c5ca33a5a418c7e0cc9120d2b5f7ce025242,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760776304284768319,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-gnz9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b74f42e1-1ce4-4f76-9ec3-02dd8cac670b,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5e4329a9fc3e73d04c916c6c138c38afc9a91517c3ff0aab002c52dacab32267,PodSandboxId:19ad1fa9e6755a792162523c6f97b84183ba0e76fd07acdc696b3157e4bbb4db,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760776281873655914,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-l59lx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a2dcfcd5-6588-447d-ad3e-eaaab038c8e7,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a15d23cf01fd5088e31334996e9ad805d3855b8057c93421fbde60c2a7c757,PodSandboxId:7d0efbb7003f622d83d8cc1ec8ff48e98d5eca9d7fa9ed0294654f8058c8c43c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760776281020157556,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8xqfn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c25a41d8-67dd-486f-ab9f-dc0bfe79728b,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dbadc23b917b9fe4988d111845c35f20a3e465eb20bf7b4c69f248af5becbf2,PodSandboxId:415a5538605405f947a8211fe35b0f22f84a5944c62688020574c0038972a0c1,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760776279318981912,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-59848,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 4a4830ab-b635-43b5-9719-3dfef197e8df,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22b1319adb326ecaa7e01f7d0ccf465f0d3c3d0f22fdcda9169f3d6807683c74,PodSandboxId:9bd5d7a4e0ffd7fa4ed76334f75c68c730020a3ef9de6feb896e58b0f2f33623,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760776265141744711,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be332f77-5dc7-40aa-8a6d-236a64de7b4f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532d63906634e2eee8ff15c305e3813941d45f30fa86d90b5c415da87fd28e12,PodSandboxId:9bd11505674ae41b9e034a65f5761df7b1c278821473959b03d6343c59396ab6,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760776248021697728,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zvmkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57ff2db-5f8e-4afd-8617-5fbef4838726,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f00294168da89e2f376704ddf64a088d762c7b598afa1a8caed1a8cec78d22,PodSandboxId:065033d44b3db3c8d2c9d0dda567344687b1de944df180664f0d2e68b3a4d5ea,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760776229195689265,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f479881c-2d44-410a-b7d0-ce788bb1b3eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22cf9fbb72c975ca2ee788b2352f1ee5d8f81ab9b2b174c7ca1aedeab5887b56,PodSandboxId:671e9b8d39360bd4869f2f1d316aa12a5552883826f8714feecf63733201daa5,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760776217801726189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jbtqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fabdb6f-1da4-471b-8e53-9937a7448559,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54df044239d2cf41a10fec869dbc3f77ef91a6e02272cab06ee7890082f0829c,PodSandboxId:48821fe84a915782a226c2b9fd87acde4b1b1102d6d1e8ac1c8af58e6a94113a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760776217058986006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7lh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ac3f80-31fe-451a-b231-a8bc84703255,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c631c83cfd491e7b18be0fe8bd927fe9242066afe840fb38439c7b845561911,PodSandboxId:be778c110a6f704138847eea75d52c11d3af6d6d142204e931fc3835aa7ae60f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760776205324583744,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-493204,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 402cf9716aa5b2b989c8522879696c1e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPor
t\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3c88a90a820c00db746e417c06190caa2685ec308ca3c76ce497205fb12278,PodSandboxId:a84fa07f046e34572baa55d5f6fdf96c0829aa78594134f0c2ba4b3fe7bb8f6a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760776205312583462,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-493204,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aa0005f8056b5d27c325e65e02dcc74,},Annotations:map[string]string{io.kubernetes.container.
hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0d09f4da1cce2204ba53ba293e45ddcddfb54a68e967be7edff1b8455f8609,PodSandboxId:ead723d36a5b1d9c16b0faf83e6af5343436414b48b5a5c6803a2df76a14590a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760776205267770407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-493204,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 6afc8c47a925a4319943cf14e40744a8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a2bce6d242b782851eb6d8f2dcfe576ba38219927a5ed4edc815a0e9ca2426,PodSandboxId:6170e9b358675e1936e1653768e842072fd28f195d04668a328321c4f03a8ef3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760776205284345766,Labels:map[string]string{io.kubernetes.containe
r.name: etcd,io.kubernetes.pod.name: etcd-addons-493204,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cb7fd13df7aebf1e773407b47aa0ba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0926ec17-2e34-4bcb-a074-23f29b1cf106 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.257724401Z" level=debug msg="Received container exit code: 0, message: " file="oci/runtime_oci.go:670" id=8204de9e-0811-40c5-8c7e-bba72c2ea5c6 name=/runtime.v1.RuntimeService/ExecSync
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.257910264Z" level=debug msg="Response: &ExecSyncResponse{Stdout:[FILTERED],Stderr:[],ExitCode:0,}" file="otel-collector/interceptors.go:74" id=8204de9e-0811-40c5-8c7e-bba72c2ea5c6 name=/runtime.v1.RuntimeService/ExecSync
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.272181510Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a6adc76d-3b3a-44ca-9a88-cbcc534e0d15 name=/runtime.v1.RuntimeService/Version
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.272273980Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6adc76d-3b3a-44ca-9a88-cbcc534e0d15 name=/runtime.v1.RuntimeService/Version
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.273778818Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0fa225e-db4d-4867-9742-cf90130704f0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.275763078Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760776545275689654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598024,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0fa225e-db4d-4867-9742-cf90130704f0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.276899614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95f9575e-adf1-4a64-9933-9f6e4bdd2ac1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.276956148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95f9575e-adf1-4a64-9933-9f6e4bdd2ac1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 08:35:45 addons-493204 crio[818]: time="2025-10-18 08:35:45.277334590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c85f34b014c895af3588a46a4668768f6d23452c23497db46b825e18938391ca,PodSandboxId:68d2ab41d1537adb3359520c4ffdbd579d82c09bf8bd89f71cfddff26869741e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760776401952864069,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4867d583-45c3-4d54-ab34-d50cc052e2ca,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f6038f5e65934728cebef82f4d96fe61d4ec1519256256ae3c8ba16e44e304b,PodSandboxId:4c325e5a526f8d3fb61561211a708b53f33e04ebf3abbcff6d9e918a07d52e9b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760776374531895137,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e555c26b-13ad-4fca-a7c2-7ac393455c96,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acb4b17770ea1a8b7de5f94c70f62d8a325f376f4dc76a5ff73d8ee4e72cf6e2,PodSandboxId:77ce7df9efdbf83311484eb2ba28c5ca33a5a418c7e0cc9120d2b5f7ce025242,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760776304284768319,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-gnz9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b74f42e1-1ce4-4f76-9ec3-02dd8cac670b,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5e4329a9fc3e73d04c916c6c138c38afc9a91517c3ff0aab002c52dacab32267,PodSandboxId:19ad1fa9e6755a792162523c6f97b84183ba0e76fd07acdc696b3157e4bbb4db,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760776281873655914,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-l59lx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a2dcfcd5-6588-447d-ad3e-eaaab038c8e7,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a15d23cf01fd5088e31334996e9ad805d3855b8057c93421fbde60c2a7c757,PodSandboxId:7d0efbb7003f622d83d8cc1ec8ff48e98d5eca9d7fa9ed0294654f8058c8c43c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760776281020157556,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8xqfn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c25a41d8-67dd-486f-ab9f-dc0bfe79728b,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dbadc23b917b9fe4988d111845c35f20a3e465eb20bf7b4c69f248af5becbf2,PodSandboxId:415a5538605405f947a8211fe35b0f22f84a5944c62688020574c0038972a0c1,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760776279318981912,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-59848,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 4a4830ab-b635-43b5-9719-3dfef197e8df,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22b1319adb326ecaa7e01f7d0ccf465f0d3c3d0f22fdcda9169f3d6807683c74,PodSandboxId:9bd5d7a4e0ffd7fa4ed76334f75c68c730020a3ef9de6feb896e58b0f2f33623,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760776265141744711,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be332f77-5dc7-40aa-8a6d-236a64de7b4f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532d63906634e2eee8ff15c305e3813941d45f30fa86d90b5c415da87fd28e12,PodSandboxId:9bd11505674ae41b9e034a65f5761df7b1c278821473959b03d6343c59396ab6,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760776248021697728,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zvmkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57ff2db-5f8e-4afd-8617-5fbef4838726,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f00294168da89e2f376704ddf64a088d762c7b598afa1a8caed1a8cec78d22,PodSandboxId:065033d44b3db3c8d2c9d0dda567344687b1de944df180664f0d2e68b3a4d5ea,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760776229195689265,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f479881c-2d44-410a-b7d0-ce788bb1b3eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22cf9fbb72c975ca2ee788b2352f1ee5d8f81ab9b2b174c7ca1aedeab5887b56,PodSandboxId:671e9b8d39360bd4869f2f1d316aa12a5552883826f8714feecf63733201daa5,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760776217801726189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jbtqc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fabdb6f-1da4-471b-8e53-9937a7448559,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54df044239d2cf41a10fec869dbc3f77ef91a6e02272cab06ee7890082f0829c,PodSandboxId:48821fe84a915782a226c2b9fd87acde4b1b1102d6d1e8ac1c8af58e6a94113a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760776217058986006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s7lh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ac3f80-31fe-451a-b231-a8bc84703255,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c631c83cfd491e7b18be0fe8bd927fe9242066afe840fb38439c7b845561911,PodSandboxId:be778c110a6f704138847eea75d52c11d3af6d6d142204e931fc3835aa7ae60f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760776205324583744,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-493204,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 402cf9716aa5b2b989c8522879696c1e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPor
t\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3c88a90a820c00db746e417c06190caa2685ec308ca3c76ce497205fb12278,PodSandboxId:a84fa07f046e34572baa55d5f6fdf96c0829aa78594134f0c2ba4b3fe7bb8f6a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760776205312583462,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-493204,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aa0005f8056b5d27c325e65e02dcc74,},Annotations:map[string]string{io.kubernetes.container.
hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0d09f4da1cce2204ba53ba293e45ddcddfb54a68e967be7edff1b8455f8609,PodSandboxId:ead723d36a5b1d9c16b0faf83e6af5343436414b48b5a5c6803a2df76a14590a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760776205267770407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-493204,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 6afc8c47a925a4319943cf14e40744a8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a2bce6d242b782851eb6d8f2dcfe576ba38219927a5ed4edc815a0e9ca2426,PodSandboxId:6170e9b358675e1936e1653768e842072fd28f195d04668a328321c4f03a8ef3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760776205284345766,Labels:map[string]string{io.kubernetes.containe
r.name: etcd,io.kubernetes.pod.name: etcd-addons-493204,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cb7fd13df7aebf1e773407b47aa0ba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95f9575e-adf1-4a64-9933-9f6e4bdd2ac1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c85f34b014c89       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago       Running             nginx                     0                   68d2ab41d1537       nginx
	6f6038f5e6593       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   4c325e5a526f8       busybox
	acb4b17770ea1       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             4 minutes ago       Running             controller                0                   77ce7df9efdbf       ingress-nginx-controller-675c5ddd98-gnz9r
	5e4329a9fc3e7       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                             4 minutes ago       Exited              patch                     1                   19ad1fa9e6755       ingress-nginx-admission-patch-l59lx
	c0a15d23cf01f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago       Exited              create                    0                   7d0efbb7003f6       ingress-nginx-admission-create-8xqfn
	2dbadc23b917b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   415a553860540       gadget-59848
	22b1319adb326       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   9bd5d7a4e0ffd       kube-ingress-dns-minikube
	532d63906634e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   9bd11505674ae       amd-gpu-device-plugin-zvmkr
	21f00294168da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   065033d44b3db       storage-provisioner
	22cf9fbb72c97       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   671e9b8d39360       coredns-66bc5c9577-jbtqc
	54df044239d2c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   48821fe84a915       kube-proxy-s7lh5
	0c631c83cfd49       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   be778c110a6f7       kube-apiserver-addons-493204
	de3c88a90a820       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   a84fa07f046e3       kube-scheduler-addons-493204
	42a2bce6d242b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   6170e9b358675       etcd-addons-493204
	2e0d09f4da1cc       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   ead723d36a5b1       kube-controller-manager-addons-493204
	
	
	==> coredns [22cf9fbb72c975ca2ee788b2352f1ee5d8f81ab9b2b174c7ca1aedeab5887b56] <==
	[INFO] 10.244.0.8:34396 - 22945 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000804685s
	[INFO] 10.244.0.8:34396 - 23513 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000219396s
	[INFO] 10.244.0.8:34396 - 13212 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000515478s
	[INFO] 10.244.0.8:34396 - 50245 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000117477s
	[INFO] 10.244.0.8:34396 - 49117 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000399238s
	[INFO] 10.244.0.8:34396 - 58819 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00011494s
	[INFO] 10.244.0.8:34396 - 62023 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.001273597s
	[INFO] 10.244.0.8:52997 - 45201 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000268658s
	[INFO] 10.244.0.8:52997 - 45498 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155054s
	[INFO] 10.244.0.8:46839 - 2443 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087398s
	[INFO] 10.244.0.8:46839 - 2167 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000359649s
	[INFO] 10.244.0.8:37747 - 3865 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097659s
	[INFO] 10.244.0.8:37747 - 3578 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000188256s
	[INFO] 10.244.0.8:39278 - 55346 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000084344s
	[INFO] 10.244.0.8:39278 - 55125 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000202747s
	[INFO] 10.244.0.23:54687 - 39944 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000632508s
	[INFO] 10.244.0.23:33872 - 59355 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000179747s
	[INFO] 10.244.0.23:55514 - 12660 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119574s
	[INFO] 10.244.0.23:45558 - 20355 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001106868s
	[INFO] 10.244.0.23:32915 - 1242 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108006s
	[INFO] 10.244.0.23:50457 - 20723 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124037s
	[INFO] 10.244.0.23:52780 - 38128 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001540621s
	[INFO] 10.244.0.23:46390 - 57971 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001669502s
	[INFO] 10.244.0.27:60342 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000420996s
	[INFO] 10.244.0.27:56800 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000180782s
	
	
	==> describe nodes <==
	Name:               addons-493204
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-493204
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=addons-493204
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T08_30_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-493204
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 08:30:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-493204
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 08:35:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 08:33:46 +0000   Sat, 18 Oct 2025 08:30:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 08:33:46 +0000   Sat, 18 Oct 2025 08:30:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 08:33:46 +0000   Sat, 18 Oct 2025 08:30:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 08:33:46 +0000   Sat, 18 Oct 2025 08:30:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    addons-493204
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 83a766dfa3e941acb9b84ce5492a5b47
	  System UUID:                83a766df-a3e9-41ac-b9b8-4ce5492a5b47
	  Boot ID:                    7ab02e36-75cb-402f-b006-08dd4c1e8620
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  default                     hello-world-app-5d498dc89-k2wb6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gadget                      gadget-59848                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-gnz9r    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m20s
	  kube-system                 amd-gpu-device-plugin-zvmkr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 coredns-66bc5c9577-jbtqc                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m29s
	  kube-system                 etcd-addons-493204                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m34s
	  kube-system                 kube-apiserver-addons-493204                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-controller-manager-addons-493204        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-proxy-s7lh5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-scheduler-addons-493204                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m27s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m41s (x8 over 5m41s)  kubelet          Node addons-493204 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m41s (x8 over 5m41s)  kubelet          Node addons-493204 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s (x7 over 5m41s)  kubelet          Node addons-493204 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m34s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m34s                  kubelet          Node addons-493204 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s                  kubelet          Node addons-493204 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s                  kubelet          Node addons-493204 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m33s                  kubelet          Node addons-493204 status is now: NodeReady
	  Normal  RegisteredNode           5m30s                  node-controller  Node addons-493204 event: Registered Node addons-493204 in Controller
	
	
	==> dmesg <==
	[ +13.309195] kauditd_printk_skb: 49 callbacks suppressed
	[  +9.135919] kauditd_printk_skb: 20 callbacks suppressed
	[Oct18 08:31] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.626494] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.569423] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.988276] kauditd_printk_skb: 104 callbacks suppressed
	[  +0.147270] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.907950] kauditd_printk_skb: 80 callbacks suppressed
	[  +1.025287] kauditd_printk_skb: 86 callbacks suppressed
	[  +4.434316] kauditd_printk_skb: 43 callbacks suppressed
	[  +9.626766] kauditd_printk_skb: 50 callbacks suppressed
	[Oct18 08:32] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.000877] kauditd_printk_skb: 47 callbacks suppressed
	[Oct18 08:33] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.168608] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.783909] kauditd_printk_skb: 38 callbacks suppressed
	[  +2.046811] kauditd_printk_skb: 141 callbacks suppressed
	[  +0.198709] kauditd_printk_skb: 106 callbacks suppressed
	[  +2.338471] kauditd_printk_skb: 124 callbacks suppressed
	[  +0.869530] kauditd_printk_skb: 136 callbacks suppressed
	[  +5.262621] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.222093] kauditd_printk_skb: 22 callbacks suppressed
	[Oct18 08:34] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.900997] kauditd_printk_skb: 61 callbacks suppressed
	[Oct18 08:35] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [42a2bce6d242b782851eb6d8f2dcfe576ba38219927a5ed4edc815a0e9ca2426] <==
	{"level":"info","ts":"2025-10-18T08:31:10.433715Z","caller":"traceutil/trace.go:172","msg":"trace[870369559] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:994; }","duration":"252.735607ms","start":"2025-10-18T08:31:10.180970Z","end":"2025-10-18T08:31:10.433706Z","steps":["trace[870369559] 'agreement among raft nodes before linearized reading'  (duration: 251.76491ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:31:38.360513Z","caller":"traceutil/trace.go:172","msg":"trace[426250671] transaction","detail":"{read_only:false; response_revision:1120; number_of_response:1; }","duration":"192.008414ms","start":"2025-10-18T08:31:38.168492Z","end":"2025-10-18T08:31:38.360501Z","steps":["trace[426250671] 'process raft request'  (duration: 191.828108ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:31:38.360264Z","caller":"traceutil/trace.go:172","msg":"trace[1877714131] linearizableReadLoop","detail":"{readStateIndex:1151; appliedIndex:1151; }","duration":"180.134767ms","start":"2025-10-18T08:31:38.180021Z","end":"2025-10-18T08:31:38.360156Z","steps":["trace[1877714131] 'read index received'  (duration: 180.129075ms)","trace[1877714131] 'applied index is now lower than readState.Index'  (duration: 4.935µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T08:31:38.361102Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"181.050833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T08:31:38.361141Z","caller":"traceutil/trace.go:172","msg":"trace[283331604] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1120; }","duration":"181.112271ms","start":"2025-10-18T08:31:38.180017Z","end":"2025-10-18T08:31:38.361129Z","steps":["trace[283331604] 'agreement among raft nodes before linearized reading'  (duration: 181.010511ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T08:31:38.362457Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.356053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T08:31:38.362537Z","caller":"traceutil/trace.go:172","msg":"trace[501433993] transaction","detail":"{read_only:false; response_revision:1121; number_of_response:1; }","duration":"141.227407ms","start":"2025-10-18T08:31:38.221300Z","end":"2025-10-18T08:31:38.362528Z","steps":["trace[501433993] 'process raft request'  (duration: 141.161442ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:31:38.362709Z","caller":"traceutil/trace.go:172","msg":"trace[1907753041] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1120; }","duration":"171.732857ms","start":"2025-10-18T08:31:38.190939Z","end":"2025-10-18T08:31:38.362672Z","steps":["trace[1907753041] 'agreement among raft nodes before linearized reading'  (duration: 171.33354ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:31:42.829459Z","caller":"traceutil/trace.go:172","msg":"trace[589047891] linearizableReadLoop","detail":"{readStateIndex:1170; appliedIndex:1170; }","duration":"295.523901ms","start":"2025-10-18T08:31:42.533916Z","end":"2025-10-18T08:31:42.829440Z","steps":["trace[589047891] 'read index received'  (duration: 295.515429ms)","trace[589047891] 'applied index is now lower than readState.Index'  (duration: 6.768µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T08:31:42.829873Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"295.939144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-create-g5slv\" limit:1 ","response":"range_response_count:1 size:3941"}
	{"level":"info","ts":"2025-10-18T08:31:42.829934Z","caller":"traceutil/trace.go:172","msg":"trace[2121239321] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-create-g5slv; range_end:; response_count:1; response_revision:1137; }","duration":"296.019938ms","start":"2025-10-18T08:31:42.533905Z","end":"2025-10-18T08:31:42.829925Z","steps":["trace[2121239321] 'agreement among raft nodes before linearized reading'  (duration: 295.774752ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:31:42.831562Z","caller":"traceutil/trace.go:172","msg":"trace[900211319] transaction","detail":"{read_only:false; response_revision:1138; number_of_response:1; }","duration":"344.702388ms","start":"2025-10-18T08:31:42.485886Z","end":"2025-10-18T08:31:42.830589Z","steps":["trace[900211319] 'process raft request'  (duration: 344.453928ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T08:31:42.831886Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T08:31:42.485869Z","time spent":"345.817832ms","remote":"127.0.0.1:48620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3132,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" mod_revision:876 > success:<request_put:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" value_size:3080 >> failure:<request_range:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" > >"}
	{"level":"warn","ts":"2025-10-18T08:31:42.839345Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.195167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T08:31:42.839448Z","caller":"traceutil/trace.go:172","msg":"trace[567898590] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"149.308368ms","start":"2025-10-18T08:31:42.690130Z","end":"2025-10-18T08:31:42.839438Z","steps":["trace[567898590] 'agreement among raft nodes before linearized reading'  (duration: 149.127117ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T08:31:42.839825Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.974679ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T08:31:42.839859Z","caller":"traceutil/trace.go:172","msg":"trace[1923066815] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"159.067632ms","start":"2025-10-18T08:31:42.680781Z","end":"2025-10-18T08:31:42.839849Z","steps":["trace[1923066815] 'agreement among raft nodes before linearized reading'  (duration: 158.944418ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:31:42.841100Z","caller":"traceutil/trace.go:172","msg":"trace[1036376561] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"321.143181ms","start":"2025-10-18T08:31:42.519944Z","end":"2025-10-18T08:31:42.841088Z","steps":["trace[1036376561] 'process raft request'  (duration: 320.845447ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T08:31:42.841324Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T08:31:42.519923Z","time spent":"321.341553ms","remote":"127.0.0.1:48620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3133,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" mod_revision:859 > success:<request_put:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" value_size:3080 >> failure:<request_range:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" > >"}
	{"level":"info","ts":"2025-10-18T08:31:43.398052Z","caller":"traceutil/trace.go:172","msg":"trace[73392705] linearizableReadLoop","detail":"{readStateIndex:1178; appliedIndex:1178; }","duration":"121.526024ms","start":"2025-10-18T08:31:43.276454Z","end":"2025-10-18T08:31:43.397980Z","steps":["trace[73392705] 'read index received'  (duration: 121.516544ms)","trace[73392705] 'applied index is now lower than readState.Index'  (duration: 8.384µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T08:31:43.398126Z","caller":"traceutil/trace.go:172","msg":"trace[340482981] transaction","detail":"{read_only:false; response_revision:1146; number_of_response:1; }","duration":"133.875573ms","start":"2025-10-18T08:31:43.264239Z","end":"2025-10-18T08:31:43.398115Z","steps":["trace[340482981] 'process raft request'  (duration: 133.770932ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T08:31:43.398189Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.719423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T08:31:43.398213Z","caller":"traceutil/trace.go:172","msg":"trace[2083312273] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1146; }","duration":"121.75943ms","start":"2025-10-18T08:31:43.276448Z","end":"2025-10-18T08:31:43.398207Z","steps":["trace[2083312273] 'agreement among raft nodes before linearized reading'  (duration: 121.684226ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:32:20.440111Z","caller":"traceutil/trace.go:172","msg":"trace[1345287872] transaction","detail":"{read_only:false; response_revision:1254; number_of_response:1; }","duration":"146.139008ms","start":"2025-10-18T08:32:20.293948Z","end":"2025-10-18T08:32:20.440087Z","steps":["trace[1345287872] 'process raft request'  (duration: 146.023161ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:33:15.383929Z","caller":"traceutil/trace.go:172","msg":"trace[1881570801] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1442; }","duration":"232.654904ms","start":"2025-10-18T08:33:15.151255Z","end":"2025-10-18T08:33:15.383910Z","steps":["trace[1881570801] 'process raft request'  (duration: 232.572231ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:35:45 up 6 min,  0 users,  load average: 0.37, 1.11, 0.65
	Linux addons-493204 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0c631c83cfd491e7b18be0fe8bd927fe9242066afe840fb38439c7b845561911] <==
	E1018 08:31:08.665752       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.183.151:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.183.151:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.183.151:443: connect: connection refused" logger="UnhandledError"
	E1018 08:31:08.671471       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.183.151:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.183.151:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.183.151:443: connect: connection refused" logger="UnhandledError"
	I1018 08:31:08.738509       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 08:33:00.700670       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:52778: use of closed network connection
	E1018 08:33:00.894305       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:52806: use of closed network connection
	I1018 08:33:10.362267       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.237.143"}
	I1018 08:33:16.360322       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 08:33:16.603888       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.23.71"}
	I1018 08:33:48.818108       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1018 08:33:51.707129       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1018 08:34:09.716698       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1018 08:34:15.467261       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 08:34:15.467541       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 08:34:15.508217       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 08:34:15.515379       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 08:34:15.520461       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 08:34:15.520503       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 08:34:15.651024       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 08:34:15.651090       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 08:34:15.666627       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 08:34:15.666675       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1018 08:34:16.521775       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1018 08:34:16.666804       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1018 08:34:16.695088       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1018 08:35:43.805703       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.130.228"}
	
	
	==> kube-controller-manager [2e0d09f4da1cce2204ba53ba293e45ddcddfb54a68e967be7edff1b8455f8609] <==
	E1018 08:34:25.103452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 08:34:25.405543       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 08:34:25.406710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 08:34:33.346787       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 08:34:33.347969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 08:34:35.039020       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 08:34:35.040108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 08:34:35.473453       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 08:34:35.474435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1018 08:34:45.319193       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 08:34:45.319547       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:34:45.365818       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 08:34:45.365985       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 08:34:53.132255       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 08:34:53.133366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 08:34:55.134058       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 08:34:55.135026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 08:34:55.507065       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 08:34:55.508141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 08:35:32.074092       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 08:35:32.075297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 08:35:32.466177       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 08:35:32.467210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 08:35:41.372367       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 08:35:41.373563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [54df044239d2cf41a10fec869dbc3f77ef91a6e02272cab06ee7890082f0829c] <==
	I1018 08:30:17.766286       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 08:30:17.866841       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 08:30:17.866889       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.58"]
	E1018 08:30:17.866978       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 08:30:18.173143       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 08:30:18.176977       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 08:30:18.177377       1 server_linux.go:132] "Using iptables Proxier"
	I1018 08:30:18.238234       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 08:30:18.239547       1 server.go:527] "Version info" version="v1.34.1"
	I1018 08:30:18.239936       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 08:30:18.256680       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 08:30:18.256696       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 08:30:18.257143       1 config.go:200] "Starting service config controller"
	I1018 08:30:18.258656       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 08:30:18.258702       1 config.go:106] "Starting endpoint slice config controller"
	I1018 08:30:18.258706       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 08:30:18.258747       1 config.go:309] "Starting node config controller"
	I1018 08:30:18.258753       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 08:30:18.357790       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 08:30:18.359103       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 08:30:18.359141       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 08:30:18.359268       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [de3c88a90a820c00db746e417c06190caa2685ec308ca3c76ce497205fb12278] <==
	E1018 08:30:08.214098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 08:30:08.214143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 08:30:08.214199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 08:30:08.214266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 08:30:08.214274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 08:30:08.214472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 08:30:08.214640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 08:30:09.036645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 08:30:09.045237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 08:30:09.103721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 08:30:09.137684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 08:30:09.154323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 08:30:09.177579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 08:30:09.177772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 08:30:09.194710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 08:30:09.231039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 08:30:09.270530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 08:30:09.277767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 08:30:09.322652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 08:30:09.406768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 08:30:09.420929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 08:30:09.461729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 08:30:09.539897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 08:30:09.570239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1018 08:30:11.902496       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 08:34:18 addons-493204 kubelet[1507]: I1018 08:34:18.614018    1507 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f35c3a41c155d514f8dd26b267a95808e2497d79d3cd466add9172bb0d6d863"} err="failed to get container status \"4f35c3a41c155d514f8dd26b267a95808e2497d79d3cd466add9172bb0d6d863\": rpc error: code = NotFound desc = could not find container \"4f35c3a41c155d514f8dd26b267a95808e2497d79d3cd466add9172bb0d6d863\": container with ID starting with 4f35c3a41c155d514f8dd26b267a95808e2497d79d3cd466add9172bb0d6d863 not found: ID does not exist"
	Oct 18 08:34:19 addons-493204 kubelet[1507]: I1018 08:34:19.321119    1507 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b485867-25c4-4062-8f3f-1b548be90420" path="/var/lib/kubelet/pods/4b485867-25c4-4062-8f3f-1b548be90420/volumes"
	Oct 18 08:34:19 addons-493204 kubelet[1507]: I1018 08:34:19.321627    1507 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3eddcdd-a8f4-4fda-b965-5a45a9d17939" path="/var/lib/kubelet/pods/b3eddcdd-a8f4-4fda-b965-5a45a9d17939/volumes"
	Oct 18 08:34:19 addons-493204 kubelet[1507]: I1018 08:34:19.322209    1507 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df179927-1992-413c-b1ea-ba90ed82da64" path="/var/lib/kubelet/pods/df179927-1992-413c-b1ea-ba90ed82da64/volumes"
	Oct 18 08:34:21 addons-493204 kubelet[1507]: E1018 08:34:21.809102    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760776461808588054  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:34:21 addons-493204 kubelet[1507]: E1018 08:34:21.809132    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760776461808588054  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:34:31 addons-493204 kubelet[1507]: E1018 08:34:31.811705    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760776471811127208  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:34:31 addons-493204 kubelet[1507]: E1018 08:34:31.811737    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760776471811127208  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:34:32 addons-493204 kubelet[1507]: I1018 08:34:32.316675    1507 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zvmkr" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:34:41 addons-493204 kubelet[1507]: E1018 08:34:41.815163    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760776481814705592  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:34:41 addons-493204 kubelet[1507]: E1018 08:34:41.815205    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760776481814705592  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:34:51 addons-493204 kubelet[1507]: E1018 08:34:51.818757    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760776491818149559  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:34:51 addons-493204 kubelet[1507]: E1018 08:34:51.818804    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760776491818149559  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:35:01 addons-493204 kubelet[1507]: E1018 08:35:01.821502    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760776501820996981  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:35:01 addons-493204 kubelet[1507]: E1018 08:35:01.821539    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760776501820996981  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:35:11 addons-493204 kubelet[1507]: E1018 08:35:11.824575    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760776511823965656  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:35:11 addons-493204 kubelet[1507]: E1018 08:35:11.824630    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760776511823965656  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:35:21 addons-493204 kubelet[1507]: E1018 08:35:21.829314    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760776521827866703  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:35:21 addons-493204 kubelet[1507]: E1018 08:35:21.829347    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760776521827866703  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:35:22 addons-493204 kubelet[1507]: I1018 08:35:22.317062    1507 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:35:31 addons-493204 kubelet[1507]: E1018 08:35:31.832657    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760776531831759350  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:35:31 addons-493204 kubelet[1507]: E1018 08:35:31.832693    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760776531831759350  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:35:41 addons-493204 kubelet[1507]: E1018 08:35:41.835686    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760776541835138506  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:35:41 addons-493204 kubelet[1507]: E1018 08:35:41.835713    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760776541835138506  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598024}  inodes_used:{value:201}}"
	Oct 18 08:35:43 addons-493204 kubelet[1507]: I1018 08:35:43.839840    1507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlw2r\" (UniqueName: \"kubernetes.io/projected/4c5ce290-55b1-44f7-baa1-d24b9b748a9d-kube-api-access-qlw2r\") pod \"hello-world-app-5d498dc89-k2wb6\" (UID: \"4c5ce290-55b1-44f7-baa1-d24b9b748a9d\") " pod="default/hello-world-app-5d498dc89-k2wb6"
	
	
	==> storage-provisioner [21f00294168da89e2f376704ddf64a088d762c7b598afa1a8caed1a8cec78d22] <==
	W1018 08:35:20.968533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:22.972983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:22.981738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:24.985382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:24.990506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:26.994132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:27.003487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:29.006717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:29.013093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:31.016937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:31.026246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:33.030192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:33.036479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:35.040904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:35.047238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:37.051789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:37.058258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:39.062582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:39.069948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:41.074372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:41.080765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:43.084939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:43.094806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:45.101901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:45.116913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-493204 -n addons-493204
helpers_test.go:269: (dbg) Run:  kubectl --context addons-493204 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-k2wb6 ingress-nginx-admission-create-8xqfn ingress-nginx-admission-patch-l59lx
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-493204 describe pod hello-world-app-5d498dc89-k2wb6 ingress-nginx-admission-create-8xqfn ingress-nginx-admission-patch-l59lx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-493204 describe pod hello-world-app-5d498dc89-k2wb6 ingress-nginx-admission-create-8xqfn ingress-nginx-admission-patch-l59lx: exit status 1 (71.491674ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-k2wb6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-493204/192.168.39.58
	Start Time:       Sat, 18 Oct 2025 08:35:43 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qlw2r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qlw2r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-k2wb6 to addons-493204
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     0s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.966s (1.966s including waiting). Image size: 4944818 bytes.
	  Normal  Created    0s    kubelet            Created container: hello-world-app
	  Normal  Started    0s    kubelet            Started container hello-world-app

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8xqfn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-l59lx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-493204 describe pod hello-world-app-5d498dc89-k2wb6 ingress-nginx-admission-create-8xqfn ingress-nginx-admission-patch-l59lx: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-493204 addons disable ingress-dns --alsologtostderr -v=1: (1.57089524s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-493204 addons disable ingress --alsologtostderr -v=1: (7.853695065s)
--- FAIL: TestAddons/parallel/Ingress (159.99s)
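Triage sketch: the post-mortem above shows the hello-world-app pod scheduling, pulling its image, and starting normally, so the cluster itself was still responsive when the test gave up. Assuming the addons-493204 profile is still running, the ingress controller can be inspected by hand with the same binary the test used; note that the test disables the ingress and ingress-dns addons at addons_test.go:1053 above, so they may need to be re-enabled first. The commands below are a hedged manual check, not part of the test:

	out/minikube-linux-amd64 -p addons-493204 addons enable ingress
	out/minikube-linux-amd64 -p addons-493204 kubectl -- get pods -n ingress-nginx -o wide
	out/minikube-linux-amd64 -p addons-493204 kubectl -- get ingress -A
	out/minikube-linux-amd64 -p addons-493204 kubectl -- logs -n ingress-nginx -l app.kubernetes.io/component=controller --tail=50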

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-679071 ssh pgrep buildkitd: exit status 1 (194.85194ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image build -t localhost/my-image:functional-679071 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-679071 image build -t localhost/my-image:functional-679071 testdata/build --alsologtostderr: (4.189297543s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-679071 image build -t localhost/my-image:functional-679071 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 970365864ee
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-679071
--> 692232162aa
Successfully tagged localhost/my-image:functional-679071
692232162aa27f6ff959af4eaef4a29d82edbd6c5df306fcfe798503bd13f9da
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-679071 image build -t localhost/my-image:functional-679071 testdata/build --alsologtostderr:
I1018 08:49:15.078493   20254 out.go:360] Setting OutFile to fd 1 ...
I1018 08:49:15.078937   20254 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:49:15.078952   20254 out.go:374] Setting ErrFile to fd 2...
I1018 08:49:15.078958   20254 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:49:15.079278   20254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
I1018 08:49:15.079846   20254 config.go:182] Loaded profile config "functional-679071": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:49:15.080639   20254 config.go:182] Loaded profile config "functional-679071": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:49:15.081045   20254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 08:49:15.081099   20254 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 08:49:15.095166   20254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44285
I1018 08:49:15.095701   20254 main.go:141] libmachine: () Calling .GetVersion
I1018 08:49:15.096245   20254 main.go:141] libmachine: Using API Version  1
I1018 08:49:15.096281   20254 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 08:49:15.096721   20254 main.go:141] libmachine: () Calling .GetMachineName
I1018 08:49:15.096969   20254 main.go:141] libmachine: (functional-679071) Calling .GetState
I1018 08:49:15.099378   20254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 08:49:15.099427   20254 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 08:49:15.113400   20254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
I1018 08:49:15.113883   20254 main.go:141] libmachine: () Calling .GetVersion
I1018 08:49:15.114367   20254 main.go:141] libmachine: Using API Version  1
I1018 08:49:15.114388   20254 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 08:49:15.114721   20254 main.go:141] libmachine: () Calling .GetMachineName
I1018 08:49:15.114899   20254 main.go:141] libmachine: (functional-679071) Calling .DriverName
I1018 08:49:15.115121   20254 ssh_runner.go:195] Run: systemctl --version
I1018 08:49:15.115149   20254 main.go:141] libmachine: (functional-679071) Calling .GetSSHHostname
I1018 08:49:15.118020   20254 main.go:141] libmachine: (functional-679071) DBG | domain functional-679071 has defined MAC address 52:54:00:32:cf:ae in network mk-functional-679071
I1018 08:49:15.118462   20254 main.go:141] libmachine: (functional-679071) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:cf:ae", ip: ""} in network mk-functional-679071: {Iface:virbr1 ExpiryTime:2025-10-18 09:38:26 +0000 UTC Type:0 Mac:52:54:00:32:cf:ae Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:functional-679071 Clientid:01:52:54:00:32:cf:ae}
I1018 08:49:15.118493   20254 main.go:141] libmachine: (functional-679071) DBG | domain functional-679071 has defined IP address 192.168.39.157 and MAC address 52:54:00:32:cf:ae in network mk-functional-679071
I1018 08:49:15.118719   20254 main.go:141] libmachine: (functional-679071) Calling .GetSSHPort
I1018 08:49:15.118947   20254 main.go:141] libmachine: (functional-679071) Calling .GetSSHKeyPath
I1018 08:49:15.119139   20254 main.go:141] libmachine: (functional-679071) Calling .GetSSHUsername
I1018 08:49:15.119358   20254 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/functional-679071/id_rsa Username:docker}
I1018 08:49:15.201270   20254 build_images.go:161] Building image from path: /tmp/build.1165452086.tar
I1018 08:49:15.201342   20254 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1018 08:49:15.214275   20254 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1165452086.tar
I1018 08:49:15.219533   20254 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1165452086.tar: stat -c "%s %y" /var/lib/minikube/build/build.1165452086.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1165452086.tar': No such file or directory
I1018 08:49:15.219573   20254 ssh_runner.go:362] scp /tmp/build.1165452086.tar --> /var/lib/minikube/build/build.1165452086.tar (3072 bytes)
I1018 08:49:15.253300   20254 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1165452086
I1018 08:49:15.266859   20254 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1165452086 -xf /var/lib/minikube/build/build.1165452086.tar
I1018 08:49:15.279160   20254 crio.go:315] Building image: /var/lib/minikube/build/build.1165452086
I1018 08:49:15.279229   20254 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-679071 /var/lib/minikube/build/build.1165452086 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1018 08:49:19.170331   20254 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-679071 /var/lib/minikube/build/build.1165452086 --cgroup-manager=cgroupfs: (3.891071891s)
I1018 08:49:19.170417   20254 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1165452086
I1018 08:49:19.196161   20254 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1165452086.tar
I1018 08:49:19.216609   20254 build_images.go:217] Built localhost/my-image:functional-679071 from /tmp/build.1165452086.tar
I1018 08:49:19.216665   20254 build_images.go:133] succeeded building to: functional-679071
I1018 08:49:19.216672   20254 build_images.go:134] failed building to: 
I1018 08:49:19.216747   20254 main.go:141] libmachine: Making call to close driver server
I1018 08:49:19.216772   20254 main.go:141] libmachine: (functional-679071) Calling .Close
I1018 08:49:19.217132   20254 main.go:141] libmachine: Successfully made call to close driver server
I1018 08:49:19.217151   20254 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 08:49:19.217161   20254 main.go:141] libmachine: Making call to close driver server
I1018 08:49:19.217169   20254 main.go:141] libmachine: (functional-679071) Calling .Close
I1018 08:49:19.217455   20254 main.go:141] libmachine: Successfully made call to close driver server
I1018 08:49:19.217493   20254 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 08:49:19.219132   20254 main.go:141] libmachine: (functional-679071) DBG | Closing plugin on server side
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-679071 image ls: (2.315721921s)
functional_test.go:461: expected "localhost/my-image:functional-679071" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (6.70s)
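Triage sketch: the podman build inside the node completed and tagged localhost/my-image:functional-679071 (commit 692232162aa above), yet the follow-up image ls at functional_test.go:466 did not report the tag. Assuming the functional-679071 profile is still running, whether the image actually landed in the node's container storage can be checked directly; the grep pattern is illustrative only, and these commands are a manual check rather than part of the test:

	out/minikube-linux-amd64 -p functional-679071 ssh "sudo podman images localhost/my-image"
	out/minikube-linux-amd64 -p functional-679071 ssh "sudo crictl images | grep my-image"
	out/minikube-linux-amd64 -p functional-679071 image ls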

                                                
                                    
x
+
TestPreload (165.07s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-279124 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-279124 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m42.562299886s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-279124 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-279124 image pull gcr.io/k8s-minikube/busybox: (3.722157701s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-279124
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-279124: (6.879400818s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-279124 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-279124 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (48.819777136s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-279124 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-18 09:31:27.101882773 +0000 UTC m=+3726.744323832
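Triage sketch: the image list above matches the default v1.32.0 image set, and gcr.io/k8s-minikube/busybox, pulled at preload_test.go:51 before the stop, is absent after the restart. Assuming the test-preload-279124 VM is still available, the node's image store can be inspected directly; the grep pattern is illustrative only, and these commands are a manual check rather than part of the test:

	out/minikube-linux-amd64 -p test-preload-279124 ssh "sudo crictl images | grep busybox"
	out/minikube-linux-amd64 -p test-preload-279124 image ls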
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-279124 -n test-preload-279124
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-279124 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-279124 logs -n 25: (1.168882951s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-407105 ssh -n multinode-407105-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ ssh     │ multinode-407105 ssh -n multinode-407105 sudo cat /home/docker/cp-test_multinode-407105-m03_multinode-407105.txt                                                                    │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ cp      │ multinode-407105 cp multinode-407105-m03:/home/docker/cp-test.txt multinode-407105-m02:/home/docker/cp-test_multinode-407105-m03_multinode-407105-m02.txt                           │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ ssh     │ multinode-407105 ssh -n multinode-407105-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ ssh     │ multinode-407105 ssh -n multinode-407105-m02 sudo cat /home/docker/cp-test_multinode-407105-m03_multinode-407105-m02.txt                                                            │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ node    │ multinode-407105 node stop m03                                                                                                                                                      │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ node    │ multinode-407105 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:18 UTC │
	│ node    │ list -p multinode-407105                                                                                                                                                            │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ stop    │ -p multinode-407105                                                                                                                                                                 │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:20 UTC │
	│ start   │ -p multinode-407105 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:20 UTC │ 18 Oct 25 09:23 UTC │
	│ node    │ list -p multinode-407105                                                                                                                                                            │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:23 UTC │                     │
	│ node    │ multinode-407105 node delete m03                                                                                                                                                    │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:23 UTC │ 18 Oct 25 09:23 UTC │
	│ stop    │ multinode-407105 stop                                                                                                                                                               │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:23 UTC │ 18 Oct 25 09:26 UTC │
	│ start   │ -p multinode-407105 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:27 UTC │
	│ node    │ list -p multinode-407105                                                                                                                                                            │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │                     │
	│ start   │ -p multinode-407105-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-407105-m02 │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │                     │
	│ start   │ -p multinode-407105-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-407105-m03 │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ node    │ add -p multinode-407105                                                                                                                                                             │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │                     │
	│ delete  │ -p multinode-407105-m03                                                                                                                                                             │ multinode-407105-m03 │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ delete  │ -p multinode-407105                                                                                                                                                                 │ multinode-407105     │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ start   │ -p test-preload-279124 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-279124  │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:30 UTC │
	│ image   │ test-preload-279124 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-279124  │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ stop    │ -p test-preload-279124                                                                                                                                                              │ test-preload-279124  │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ start   │ -p test-preload-279124 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-279124  │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:31 UTC │
	│ image   │ test-preload-279124 image list                                                                                                                                                      │ test-preload-279124  │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:31 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:30:38
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:30:38.100192   42385 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:30:38.101655   42385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:38.101678   42385 out.go:374] Setting ErrFile to fd 2...
	I1018 09:30:38.101685   42385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:38.101991   42385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
	I1018 09:30:38.102583   42385 out.go:368] Setting JSON to false
	I1018 09:30:38.103498   42385 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4388,"bootTime":1760775450,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:30:38.103597   42385 start.go:141] virtualization: kvm guest
	I1018 09:30:38.105761   42385 out.go:179] * [test-preload-279124] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:30:38.107271   42385 notify.go:220] Checking for updates...
	I1018 09:30:38.107337   42385 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:30:38.109130   42385 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:30:38.110724   42385 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	I1018 09:30:38.112325   42385 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	I1018 09:30:38.113789   42385 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:30:38.115318   42385 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:30:38.117249   42385 config.go:182] Loaded profile config "test-preload-279124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1018 09:30:38.117758   42385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:30:38.117829   42385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:30:38.131802   42385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I1018 09:30:38.132373   42385 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:30:38.132902   42385 main.go:141] libmachine: Using API Version  1
	I1018 09:30:38.132938   42385 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:30:38.133325   42385 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:30:38.133536   42385 main.go:141] libmachine: (test-preload-279124) Calling .DriverName
	I1018 09:30:38.135945   42385 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1018 09:30:38.137495   42385 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:30:38.137882   42385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:30:38.137950   42385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:30:38.151632   42385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42635
	I1018 09:30:38.152168   42385 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:30:38.152703   42385 main.go:141] libmachine: Using API Version  1
	I1018 09:30:38.152728   42385 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:30:38.153167   42385 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:30:38.153387   42385 main.go:141] libmachine: (test-preload-279124) Calling .DriverName
	I1018 09:30:38.188651   42385 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 09:30:38.190661   42385 start.go:305] selected driver: kvm2
	I1018 09:30:38.190685   42385 start.go:925] validating driver "kvm2" against &{Name:test-preload-279124 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-279124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:30:38.190812   42385 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:30:38.191568   42385 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:38.191643   42385 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21767-6063/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:30:38.206848   42385 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:30:38.206876   42385 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21767-6063/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:30:38.221550   42385 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:30:38.222005   42385 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:30:38.222044   42385 cni.go:84] Creating CNI manager for ""
	I1018 09:30:38.222098   42385 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:30:38.222158   42385 start.go:349] cluster config:
	{Name:test-preload-279124 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-279124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:30:38.222302   42385 iso.go:125] acquiring lock: {Name:mk5e486e8f05c541fb7f7e8ec869cafc091f385a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:38.225468   42385 out.go:179] * Starting "test-preload-279124" primary control-plane node in "test-preload-279124" cluster
	I1018 09:30:38.226942   42385 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1018 09:30:38.252357   42385 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1018 09:30:38.252387   42385 cache.go:58] Caching tarball of preloaded images
	I1018 09:30:38.252598   42385 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1018 09:30:38.254629   42385 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1018 09:30:38.256244   42385 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1018 09:30:38.279550   42385 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1018 09:30:38.279600   42385 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1018 09:30:41.178890   42385 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1018 09:30:41.179066   42385 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/config.json ...
	I1018 09:30:41.179303   42385 start.go:360] acquireMachinesLock for test-preload-279124: {Name:mk264c321ec76ef9ad1eaece53fae2e5807c459a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 09:30:41.179366   42385 start.go:364] duration metric: took 40.491µs to acquireMachinesLock for "test-preload-279124"
	I1018 09:30:41.179380   42385 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:30:41.179386   42385 fix.go:54] fixHost starting: 
	I1018 09:30:41.179641   42385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:30:41.179674   42385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:30:41.193327   42385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I1018 09:30:41.193893   42385 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:30:41.194479   42385 main.go:141] libmachine: Using API Version  1
	I1018 09:30:41.194496   42385 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:30:41.194832   42385 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:30:41.195043   42385 main.go:141] libmachine: (test-preload-279124) Calling .DriverName
	I1018 09:30:41.195202   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetState
	I1018 09:30:41.196978   42385 fix.go:112] recreateIfNeeded on test-preload-279124: state=Stopped err=<nil>
	I1018 09:30:41.197031   42385 main.go:141] libmachine: (test-preload-279124) Calling .DriverName
	W1018 09:30:41.197183   42385 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:30:41.200273   42385 out.go:252] * Restarting existing kvm2 VM for "test-preload-279124" ...
	I1018 09:30:41.200309   42385 main.go:141] libmachine: (test-preload-279124) Calling .Start
	I1018 09:30:41.200559   42385 main.go:141] libmachine: (test-preload-279124) starting domain...
	I1018 09:30:41.200581   42385 main.go:141] libmachine: (test-preload-279124) ensuring networks are active...
	I1018 09:30:41.201600   42385 main.go:141] libmachine: (test-preload-279124) Ensuring network default is active
	I1018 09:30:41.202066   42385 main.go:141] libmachine: (test-preload-279124) Ensuring network mk-test-preload-279124 is active
	I1018 09:30:41.202464   42385 main.go:141] libmachine: (test-preload-279124) getting domain XML...
	I1018 09:30:41.203730   42385 main.go:141] libmachine: (test-preload-279124) DBG | starting domain XML:
	I1018 09:30:41.203756   42385 main.go:141] libmachine: (test-preload-279124) DBG | <domain type='kvm'>
	I1018 09:30:41.203767   42385 main.go:141] libmachine: (test-preload-279124) DBG |   <name>test-preload-279124</name>
	I1018 09:30:41.203777   42385 main.go:141] libmachine: (test-preload-279124) DBG |   <uuid>a66a8098-790f-4c97-b226-7e19429e092f</uuid>
	I1018 09:30:41.203786   42385 main.go:141] libmachine: (test-preload-279124) DBG |   <memory unit='KiB'>3145728</memory>
	I1018 09:30:41.203798   42385 main.go:141] libmachine: (test-preload-279124) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1018 09:30:41.203805   42385 main.go:141] libmachine: (test-preload-279124) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 09:30:41.203810   42385 main.go:141] libmachine: (test-preload-279124) DBG |   <os>
	I1018 09:30:41.203817   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 09:30:41.203821   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <boot dev='cdrom'/>
	I1018 09:30:41.203830   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <boot dev='hd'/>
	I1018 09:30:41.203838   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <bootmenu enable='no'/>
	I1018 09:30:41.203842   42385 main.go:141] libmachine: (test-preload-279124) DBG |   </os>
	I1018 09:30:41.203853   42385 main.go:141] libmachine: (test-preload-279124) DBG |   <features>
	I1018 09:30:41.203858   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <acpi/>
	I1018 09:30:41.203865   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <apic/>
	I1018 09:30:41.203879   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <pae/>
	I1018 09:30:41.203886   42385 main.go:141] libmachine: (test-preload-279124) DBG |   </features>
	I1018 09:30:41.203894   42385 main.go:141] libmachine: (test-preload-279124) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 09:30:41.203898   42385 main.go:141] libmachine: (test-preload-279124) DBG |   <clock offset='utc'/>
	I1018 09:30:41.203903   42385 main.go:141] libmachine: (test-preload-279124) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 09:30:41.203909   42385 main.go:141] libmachine: (test-preload-279124) DBG |   <on_reboot>restart</on_reboot>
	I1018 09:30:41.203914   42385 main.go:141] libmachine: (test-preload-279124) DBG |   <on_crash>destroy</on_crash>
	I1018 09:30:41.203932   42385 main.go:141] libmachine: (test-preload-279124) DBG |   <devices>
	I1018 09:30:41.203966   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 09:30:41.204030   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <disk type='file' device='cdrom'>
	I1018 09:30:41.204049   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <driver name='qemu' type='raw'/>
	I1018 09:30:41.204062   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <source file='/home/jenkins/minikube-integration/21767-6063/.minikube/machines/test-preload-279124/boot2docker.iso'/>
	I1018 09:30:41.204075   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 09:30:41.204084   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <readonly/>
	I1018 09:30:41.204096   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 09:30:41.204107   42385 main.go:141] libmachine: (test-preload-279124) DBG |     </disk>
	I1018 09:30:41.204116   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <disk type='file' device='disk'>
	I1018 09:30:41.204131   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 09:30:41.204150   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <source file='/home/jenkins/minikube-integration/21767-6063/.minikube/machines/test-preload-279124/test-preload-279124.rawdisk'/>
	I1018 09:30:41.204167   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <target dev='hda' bus='virtio'/>
	I1018 09:30:41.204252   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 09:30:41.204283   42385 main.go:141] libmachine: (test-preload-279124) DBG |     </disk>
	I1018 09:30:41.204297   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 09:30:41.204321   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 09:30:41.204338   42385 main.go:141] libmachine: (test-preload-279124) DBG |     </controller>
	I1018 09:30:41.204350   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 09:30:41.204360   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 09:30:41.204372   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 09:30:41.204385   42385 main.go:141] libmachine: (test-preload-279124) DBG |     </controller>
	I1018 09:30:41.204398   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <interface type='network'>
	I1018 09:30:41.204415   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <mac address='52:54:00:c9:c4:40'/>
	I1018 09:30:41.204429   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <source network='mk-test-preload-279124'/>
	I1018 09:30:41.204439   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <model type='virtio'/>
	I1018 09:30:41.204453   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 09:30:41.204465   42385 main.go:141] libmachine: (test-preload-279124) DBG |     </interface>
	I1018 09:30:41.204480   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <interface type='network'>
	I1018 09:30:41.204493   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <mac address='52:54:00:d1:24:c7'/>
	I1018 09:30:41.204505   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <source network='default'/>
	I1018 09:30:41.204512   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <model type='virtio'/>
	I1018 09:30:41.204517   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 09:30:41.204525   42385 main.go:141] libmachine: (test-preload-279124) DBG |     </interface>
	I1018 09:30:41.204538   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <serial type='pty'>
	I1018 09:30:41.204553   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <target type='isa-serial' port='0'>
	I1018 09:30:41.204564   42385 main.go:141] libmachine: (test-preload-279124) DBG |         <model name='isa-serial'/>
	I1018 09:30:41.204573   42385 main.go:141] libmachine: (test-preload-279124) DBG |       </target>
	I1018 09:30:41.204582   42385 main.go:141] libmachine: (test-preload-279124) DBG |     </serial>
	I1018 09:30:41.204595   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <console type='pty'>
	I1018 09:30:41.204610   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <target type='serial' port='0'/>
	I1018 09:30:41.204621   42385 main.go:141] libmachine: (test-preload-279124) DBG |     </console>
	I1018 09:30:41.204633   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <input type='mouse' bus='ps2'/>
	I1018 09:30:41.204646   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 09:30:41.204657   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <audio id='1' type='none'/>
	I1018 09:30:41.204668   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <memballoon model='virtio'>
	I1018 09:30:41.204687   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 09:30:41.204699   42385 main.go:141] libmachine: (test-preload-279124) DBG |     </memballoon>
	I1018 09:30:41.204708   42385 main.go:141] libmachine: (test-preload-279124) DBG |     <rng model='virtio'>
	I1018 09:30:41.204719   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <backend model='random'>/dev/random</backend>
	I1018 09:30:41.204732   42385 main.go:141] libmachine: (test-preload-279124) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 09:30:41.204744   42385 main.go:141] libmachine: (test-preload-279124) DBG |     </rng>
	I1018 09:30:41.204754   42385 main.go:141] libmachine: (test-preload-279124) DBG |   </devices>
	I1018 09:30:41.204770   42385 main.go:141] libmachine: (test-preload-279124) DBG | </domain>
	I1018 09:30:41.204783   42385 main.go:141] libmachine: (test-preload-279124) DBG | 
	I1018 09:30:42.495963   42385 main.go:141] libmachine: (test-preload-279124) waiting for domain to start...
	I1018 09:30:42.497576   42385 main.go:141] libmachine: (test-preload-279124) domain is now running
	I1018 09:30:42.497605   42385 main.go:141] libmachine: (test-preload-279124) waiting for IP...
	I1018 09:30:42.498494   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:42.499121   42385 main.go:141] libmachine: (test-preload-279124) found domain IP: 192.168.39.249
	I1018 09:30:42.499151   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has current primary IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:42.499161   42385 main.go:141] libmachine: (test-preload-279124) reserving static IP address...
	I1018 09:30:42.499658   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "test-preload-279124", mac: "52:54:00:c9:c4:40", ip: "192.168.39.249"} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:29:00 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:42.499684   42385 main.go:141] libmachine: (test-preload-279124) reserved static IP address 192.168.39.249 for domain test-preload-279124
	I1018 09:30:42.499710   42385 main.go:141] libmachine: (test-preload-279124) DBG | skip adding static IP to network mk-test-preload-279124 - found existing host DHCP lease matching {name: "test-preload-279124", mac: "52:54:00:c9:c4:40", ip: "192.168.39.249"}
	I1018 09:30:42.499731   42385 main.go:141] libmachine: (test-preload-279124) DBG | Getting to WaitForSSH function...
	I1018 09:30:42.499745   42385 main.go:141] libmachine: (test-preload-279124) waiting for SSH...
	I1018 09:30:42.502032   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:42.502452   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:29:00 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:42.502484   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:42.502635   42385 main.go:141] libmachine: (test-preload-279124) DBG | Using SSH client type: external
	I1018 09:30:42.502659   42385 main.go:141] libmachine: (test-preload-279124) DBG | Using SSH private key: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/test-preload-279124/id_rsa (-rw-------)
	I1018 09:30:42.502702   42385 main.go:141] libmachine: (test-preload-279124) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21767-6063/.minikube/machines/test-preload-279124/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 09:30:42.502712   42385 main.go:141] libmachine: (test-preload-279124) DBG | About to run SSH command:
	I1018 09:30:42.502720   42385 main.go:141] libmachine: (test-preload-279124) DBG | exit 0
	I1018 09:30:52.776585   42385 main.go:141] libmachine: (test-preload-279124) DBG | SSH cmd err, output: exit status 255: 
	I1018 09:30:52.776630   42385 main.go:141] libmachine: (test-preload-279124) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1018 09:30:52.776643   42385 main.go:141] libmachine: (test-preload-279124) DBG | command : exit 0
	I1018 09:30:52.776651   42385 main.go:141] libmachine: (test-preload-279124) DBG | err     : exit status 255
	I1018 09:30:52.776671   42385 main.go:141] libmachine: (test-preload-279124) DBG | output  : 
	I1018 09:30:55.778761   42385 main.go:141] libmachine: (test-preload-279124) DBG | Getting to WaitForSSH function...
	I1018 09:30:55.781612   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:55.782116   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:55.782144   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:55.782353   42385 main.go:141] libmachine: (test-preload-279124) DBG | Using SSH client type: external
	I1018 09:30:55.782373   42385 main.go:141] libmachine: (test-preload-279124) DBG | Using SSH private key: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/test-preload-279124/id_rsa (-rw-------)
	I1018 09:30:55.782403   42385 main.go:141] libmachine: (test-preload-279124) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21767-6063/.minikube/machines/test-preload-279124/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 09:30:55.782430   42385 main.go:141] libmachine: (test-preload-279124) DBG | About to run SSH command:
	I1018 09:30:55.782451   42385 main.go:141] libmachine: (test-preload-279124) DBG | exit 0
	I1018 09:30:55.914226   42385 main.go:141] libmachine: (test-preload-279124) DBG | SSH cmd err, output: <nil>: 
	I1018 09:30:55.914603   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetConfigRaw
	I1018 09:30:55.915260   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetIP
	I1018 09:30:55.918408   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:55.918870   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:55.918897   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:55.919224   42385 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/config.json ...
	I1018 09:30:55.919460   42385 machine.go:93] provisionDockerMachine start ...
	I1018 09:30:55.919479   42385 main.go:141] libmachine: (test-preload-279124) Calling .DriverName
	I1018 09:30:55.919743   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHHostname
	I1018 09:30:55.922516   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:55.922859   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:55.922886   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:55.923037   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHPort
	I1018 09:30:55.923246   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:30:55.923386   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:30:55.923496   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHUsername
	I1018 09:30:55.923626   42385 main.go:141] libmachine: Using SSH client type: native
	I1018 09:30:55.923840   42385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1018 09:30:55.923851   42385 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:30:56.030094   42385 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1018 09:30:56.030129   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetMachineName
	I1018 09:30:56.030410   42385 buildroot.go:166] provisioning hostname "test-preload-279124"
	I1018 09:30:56.030444   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetMachineName
	I1018 09:30:56.030646   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHHostname
	I1018 09:30:56.033865   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:56.034249   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:56.034277   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:56.034504   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHPort
	I1018 09:30:56.034736   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:30:56.034896   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:30:56.035030   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHUsername
	I1018 09:30:56.035234   42385 main.go:141] libmachine: Using SSH client type: native
	I1018 09:30:56.035541   42385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1018 09:30:56.035566   42385 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-279124 && echo "test-preload-279124" | sudo tee /etc/hostname
	I1018 09:30:56.157420   42385 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-279124
	
	I1018 09:30:56.157456   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHHostname
	I1018 09:30:56.160750   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:56.161171   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:56.161193   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:56.161455   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHPort
	I1018 09:30:56.161673   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:30:56.161854   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:30:56.162052   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHUsername
	I1018 09:30:56.162257   42385 main.go:141] libmachine: Using SSH client type: native
	I1018 09:30:56.162481   42385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1018 09:30:56.162507   42385 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-279124' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-279124/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-279124' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:30:56.277312   42385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:30:56.277343   42385 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21767-6063/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-6063/.minikube}
	I1018 09:30:56.277401   42385 buildroot.go:174] setting up certificates
	I1018 09:30:56.277426   42385 provision.go:84] configureAuth start
	I1018 09:30:56.277442   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetMachineName
	I1018 09:30:56.277759   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetIP
	I1018 09:30:56.281137   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:56.281504   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:56.281545   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:56.281673   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHHostname
	I1018 09:30:56.284045   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:56.284419   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:56.284438   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:56.284642   42385 provision.go:143] copyHostCerts
	I1018 09:30:56.284710   42385 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-6063/.minikube/ca.pem, removing ...
	I1018 09:30:56.284733   42385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-6063/.minikube/ca.pem
	I1018 09:30:56.284824   42385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-6063/.minikube/ca.pem (1078 bytes)
	I1018 09:30:56.284969   42385 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-6063/.minikube/cert.pem, removing ...
	I1018 09:30:56.284982   42385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-6063/.minikube/cert.pem
	I1018 09:30:56.285028   42385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-6063/.minikube/cert.pem (1123 bytes)
	I1018 09:30:56.285191   42385 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-6063/.minikube/key.pem, removing ...
	I1018 09:30:56.285203   42385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-6063/.minikube/key.pem
	I1018 09:30:56.285241   42385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-6063/.minikube/key.pem (1675 bytes)
	I1018 09:30:56.285312   42385 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca-key.pem org=jenkins.test-preload-279124 san=[127.0.0.1 192.168.39.249 localhost minikube test-preload-279124]
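The server certificate generated above carries SANs for the loopback address, the VM IP, and the machine names (localhost, minikube, test-preload-279124). A minimal sketch of how such SANs could be inspected in Go; the certificate path is copied from the log above and the program itself is not part of the test harness:

```go
// saninspect.go - print the DNS and IP SANs of a PEM-encoded certificate.
// Hypothetical helper; the path below is taken from the log, not from minikube code.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expected to include localhost, minikube, test-preload-279124
	fmt.Println("IP SANs: ", cert.IPAddresses) // expected to include 127.0.0.1 and 192.168.39.249
}
```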
	I1018 09:30:56.659044   42385 provision.go:177] copyRemoteCerts
	I1018 09:30:56.659114   42385 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:30:56.659137   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHHostname
	I1018 09:30:56.662069   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:56.662485   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:56.662531   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:56.662653   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHPort
	I1018 09:30:56.662859   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:30:56.663032   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHUsername
	I1018 09:30:56.663181   42385 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/test-preload-279124/id_rsa Username:docker}
	I1018 09:30:56.745324   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:30:56.775903   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 09:30:56.805351   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:30:56.834932   42385 provision.go:87] duration metric: took 557.479518ms to configureAuth
	I1018 09:30:56.834957   42385 buildroot.go:189] setting minikube options for container-runtime
	I1018 09:30:56.835161   42385 config.go:182] Loaded profile config "test-preload-279124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1018 09:30:56.835237   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHHostname
	I1018 09:30:56.838224   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:56.838627   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:56.838653   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:56.838880   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHPort
	I1018 09:30:56.839136   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:30:56.839358   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:30:56.839481   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHUsername
	I1018 09:30:56.839654   42385 main.go:141] libmachine: Using SSH client type: native
	I1018 09:30:56.839849   42385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1018 09:30:56.839865   42385 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:30:57.084754   42385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:30:57.084782   42385 machine.go:96] duration metric: took 1.165310763s to provisionDockerMachine
	I1018 09:30:57.084793   42385 start.go:293] postStartSetup for "test-preload-279124" (driver="kvm2")
	I1018 09:30:57.084802   42385 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:30:57.084819   42385 main.go:141] libmachine: (test-preload-279124) Calling .DriverName
	I1018 09:30:57.085201   42385 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:30:57.085254   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHHostname
	I1018 09:30:57.088187   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:57.088543   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:57.088572   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:57.088702   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHPort
	I1018 09:30:57.088896   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:30:57.089085   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHUsername
	I1018 09:30:57.089241   42385 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/test-preload-279124/id_rsa Username:docker}
	I1018 09:30:57.172197   42385 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:30:57.177140   42385 info.go:137] Remote host: Buildroot 2025.02
	I1018 09:30:57.177168   42385 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-6063/.minikube/addons for local assets ...
	I1018 09:30:57.177262   42385 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-6063/.minikube/files for local assets ...
	I1018 09:30:57.177402   42385 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-6063/.minikube/files/etc/ssl/certs/99562.pem -> 99562.pem in /etc/ssl/certs
	I1018 09:30:57.177531   42385 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:30:57.189659   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/files/etc/ssl/certs/99562.pem --> /etc/ssl/certs/99562.pem (1708 bytes)
	I1018 09:30:57.219694   42385 start.go:296] duration metric: took 134.889127ms for postStartSetup
	I1018 09:30:57.219736   42385 fix.go:56] duration metric: took 16.040349625s for fixHost
	I1018 09:30:57.219759   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHHostname
	I1018 09:30:57.222603   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:57.222932   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:57.222964   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:57.223151   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHPort
	I1018 09:30:57.223358   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:30:57.223549   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:30:57.223678   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHUsername
	I1018 09:30:57.223843   42385 main.go:141] libmachine: Using SSH client type: native
	I1018 09:30:57.224079   42385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1018 09:30:57.224091   42385 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 09:30:57.328407   42385 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760779857.302376803
	
	I1018 09:30:57.328435   42385 fix.go:216] guest clock: 1760779857.302376803
	I1018 09:30:57.328447   42385 fix.go:229] Guest: 2025-10-18 09:30:57.302376803 +0000 UTC Remote: 2025-10-18 09:30:57.219740406 +0000 UTC m=+19.157637386 (delta=82.636397ms)
	I1018 09:30:57.328505   42385 fix.go:200] guest clock delta is within tolerance: 82.636397ms
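The fix step reads the guest clock via `date +%s.%N` and compares it with the host-side timestamp; here the drift was about 82.6ms and accepted as within tolerance. A rough Go sketch of that comparison, with the tolerance value chosen purely for illustration (it is not minikube's actual threshold):

```go
// clockdelta.go - compare a guest timestamp against the local clock,
// mirroring the "guest clock delta is within tolerance" check in the log above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Value returned by `date +%s.%N` on the guest, taken from the log above.
	guest := time.Unix(1760779857, 302376803)
	host := time.Now()

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // illustrative threshold only
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; clock sync may be needed\n", delta)
	}
}
```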
	I1018 09:30:57.328515   42385 start.go:83] releasing machines lock for "test-preload-279124", held for 16.149139383s
	I1018 09:30:57.328542   42385 main.go:141] libmachine: (test-preload-279124) Calling .DriverName
	I1018 09:30:57.328810   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetIP
	I1018 09:30:57.331577   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:57.331935   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:57.331965   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:57.332165   42385 main.go:141] libmachine: (test-preload-279124) Calling .DriverName
	I1018 09:30:57.332661   42385 main.go:141] libmachine: (test-preload-279124) Calling .DriverName
	I1018 09:30:57.332854   42385 main.go:141] libmachine: (test-preload-279124) Calling .DriverName
	I1018 09:30:57.332936   42385 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:30:57.333001   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHHostname
	I1018 09:30:57.333119   42385 ssh_runner.go:195] Run: cat /version.json
	I1018 09:30:57.333143   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHHostname
	I1018 09:30:57.336215   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:57.336357   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:57.336655   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:57.336683   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:57.336710   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:57.336727   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:57.336907   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHPort
	I1018 09:30:57.337036   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHPort
	I1018 09:30:57.337129   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:30:57.337205   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:30:57.337302   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHUsername
	I1018 09:30:57.337339   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHUsername
	I1018 09:30:57.337500   42385 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/test-preload-279124/id_rsa Username:docker}
	I1018 09:30:57.337539   42385 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/test-preload-279124/id_rsa Username:docker}
	I1018 09:30:57.419785   42385 ssh_runner.go:195] Run: systemctl --version
	I1018 09:30:57.449894   42385 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:30:57.604606   42385 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:30:57.612560   42385 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:30:57.612638   42385 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:30:57.637026   42385 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 09:30:57.637050   42385 start.go:495] detecting cgroup driver to use...
	I1018 09:30:57.637118   42385 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:30:57.661490   42385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:30:57.681198   42385 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:30:57.681252   42385 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:30:57.701470   42385 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:30:57.720323   42385 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:30:57.882692   42385 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:30:58.107392   42385 docker.go:234] disabling docker service ...
	I1018 09:30:58.107471   42385 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:30:58.124770   42385 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:30:58.141753   42385 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:30:58.306464   42385 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:30:58.456618   42385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:30:58.473166   42385 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:30:58.497598   42385 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1018 09:30:58.497664   42385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:30:58.510889   42385 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:30:58.510970   42385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:30:58.524138   42385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:30:58.537288   42385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:30:58.549971   42385 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:30:58.563976   42385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:30:58.577222   42385 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:30:58.599293   42385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:30:58.612461   42385 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:30:58.623566   42385 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 09:30:58.623644   42385 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 09:30:58.644264   42385 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
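Before restarting CRI-O, the tooling probes bridge netfilter support (falling back to `modprobe br_netfilter` when the sysctl file is absent, as happened above) and enables IPv4 forwarding. A small sketch that reads the same /proc entries to confirm both prerequisites, assuming it is run on the guest after the modprobe:

```go
// netprereq.go - report the bridge-nf-call-iptables and ip_forward settings
// that the log checks before restarting CRI-O.
package main

import (
	"fmt"
	"os"
	"strings"
)

func readSetting(path string) string {
	data, err := os.ReadFile(path)
	if err != nil {
		return "unavailable (" + err.Error() + ")"
	}
	return strings.TrimSpace(string(data))
}

func main() {
	fmt.Println("bridge-nf-call-iptables:", readSetting("/proc/sys/net/bridge/bridge-nf-call-iptables"))
	fmt.Println("ip_forward:             ", readSetting("/proc/sys/net/ipv4/ip_forward"))
}
```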
	I1018 09:30:58.656505   42385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:30:58.797264   42385 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:30:58.913951   42385 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:30:58.914032   42385 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:30:58.919774   42385 start.go:563] Will wait 60s for crictl version
	I1018 09:30:58.919852   42385 ssh_runner.go:195] Run: which crictl
	I1018 09:30:58.924351   42385 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 09:30:58.967403   42385 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 09:30:58.967503   42385 ssh_runner.go:195] Run: crio --version
	I1018 09:30:58.998418   42385 ssh_runner.go:195] Run: crio --version
	I1018 09:30:59.033441   42385 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1018 09:30:59.035010   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetIP
	I1018 09:30:59.038107   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:59.038465   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:30:59.038498   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:30:59.038764   42385 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1018 09:30:59.043464   42385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:30:59.059064   42385 kubeadm.go:883] updating cluster {Name:test-preload-279124 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-279124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1018 09:30:59.059195   42385 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1018 09:30:59.059250   42385 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:30:59.102664   42385 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1018 09:30:59.102758   42385 ssh_runner.go:195] Run: which lz4
	I1018 09:30:59.107669   42385 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 09:30:59.112585   42385 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 09:30:59.112628   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1018 09:31:00.660268   42385 crio.go:462] duration metric: took 1.552633193s to copy over tarball
	I1018 09:31:00.660355   42385 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1018 09:31:02.397967   42385 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.737568104s)
	I1018 09:31:02.398005   42385 crio.go:469] duration metric: took 1.73770053s to extract the tarball
	I1018 09:31:02.398015   42385 ssh_runner.go:146] rm: /preloaded.tar.lz4
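Because the guest-side `stat` probe found no /preloaded.tar.lz4, the ~398 MB preload tarball was copied from the host cache, extracted with lz4, and then removed. A hedged sketch of the host-side half of that existence check, with the cache path copied from the log above:

```go
// preloadcheck.go - check whether the local preload tarball exists and report its size,
// analogous to the `stat -c "%s %y"` probe in the log (path taken from the log above).
package main

import (
	"fmt"
	"os"
)

func main() {
	const preload = "/home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4"
	info, err := os.Stat(preload)
	if err != nil {
		fmt.Println("preload tarball not cached:", err)
		return
	}
	fmt.Printf("preload tarball: %d bytes, modified %s\n", info.Size(), info.ModTime())
}
```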
	I1018 09:31:02.439959   42385 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:31:02.484554   42385 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:31:02.484580   42385 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:31:02.484587   42385 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.32.0 crio true true} ...
	I1018 09:31:02.484706   42385 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-279124 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-279124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:31:02.484785   42385 ssh_runner.go:195] Run: crio config
	I1018 09:31:02.534990   42385 cni.go:84] Creating CNI manager for ""
	I1018 09:31:02.535031   42385 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:31:02.535059   42385 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:31:02.535086   42385 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-279124 NodeName:test-preload-279124 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:31:02.535242   42385 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-279124"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.249"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
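Note that the cgroupDriver in the generated KubeletConfiguration ("cgroupfs") matches the cgroup_manager written into /etc/crio/crio.conf.d/02-crio.conf earlier in this log; a mismatch between the two is a common cause of kubelet startup failures. A minimal sketch that extracts the field from such a config document, using gopkg.in/yaml.v3 as an assumed dependency (this is not part of minikube's test code):

```go
// cgroupdriver.go - extract cgroupDriver from a KubeletConfiguration document.
// Sketch only; gopkg.in/yaml.v3 is an assumed dependency, not part of the harness.
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

const kubeletConfig = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
`

func main() {
	var cfg struct {
		CgroupDriver string `yaml:"cgroupDriver"`
	}
	if err := yaml.Unmarshal([]byte(kubeletConfig), &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubelet cgroupDriver:", cfg.CgroupDriver) // should match CRI-O's cgroup_manager
}
```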
	
	I1018 09:31:02.535324   42385 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1018 09:31:02.547866   42385 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:31:02.547969   42385 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:31:02.559788   42385 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1018 09:31:02.581293   42385 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:31:02.603335   42385 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1018 09:31:02.628677   42385 ssh_runner.go:195] Run: grep 192.168.39.249	control-plane.minikube.internal$ /etc/hosts
	I1018 09:31:02.633112   42385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.249	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:31:02.648286   42385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:31:02.798187   42385 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:31:02.828903   42385 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124 for IP: 192.168.39.249
	I1018 09:31:02.828948   42385 certs.go:195] generating shared ca certs ...
	I1018 09:31:02.828980   42385 certs.go:227] acquiring lock for ca certs: {Name:mk72b8eadb27773dc6399bddc4b95ee0664cbf67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:02.829169   42385 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-6063/.minikube/ca.key
	I1018 09:31:02.829241   42385 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.key
	I1018 09:31:02.829258   42385 certs.go:257] generating profile certs ...
	I1018 09:31:02.829417   42385 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/client.key
	I1018 09:31:02.829500   42385 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/apiserver.key.808218d7
	I1018 09:31:02.829556   42385 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/proxy-client.key
	I1018 09:31:02.829715   42385 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/9956.pem (1338 bytes)
	W1018 09:31:02.829760   42385 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-6063/.minikube/certs/9956_empty.pem, impossibly tiny 0 bytes
	I1018 09:31:02.829774   42385 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:31:02.829810   42385 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:31:02.829856   42385 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:31:02.829890   42385 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/key.pem (1675 bytes)
	I1018 09:31:02.829987   42385 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/files/etc/ssl/certs/99562.pem (1708 bytes)
	I1018 09:31:02.830816   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:31:02.877157   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:31:02.915424   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:31:02.945963   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:31:02.976705   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 09:31:03.007384   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:31:03.038177   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:31:03.068590   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:31:03.098917   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:31:03.129090   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/certs/9956.pem --> /usr/share/ca-certificates/9956.pem (1338 bytes)
	I1018 09:31:03.158947   42385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/files/etc/ssl/certs/99562.pem --> /usr/share/ca-certificates/99562.pem (1708 bytes)
	I1018 09:31:03.188685   42385 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:31:03.209740   42385 ssh_runner.go:195] Run: openssl version
	I1018 09:31:03.216489   42385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9956.pem && ln -fs /usr/share/ca-certificates/9956.pem /etc/ssl/certs/9956.pem"
	I1018 09:31:03.230141   42385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9956.pem
	I1018 09:31:03.235426   42385 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:38 /usr/share/ca-certificates/9956.pem
	I1018 09:31:03.235486   42385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9956.pem
	I1018 09:31:03.242699   42385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9956.pem /etc/ssl/certs/51391683.0"
	I1018 09:31:03.256350   42385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99562.pem && ln -fs /usr/share/ca-certificates/99562.pem /etc/ssl/certs/99562.pem"
	I1018 09:31:03.269994   42385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99562.pem
	I1018 09:31:03.275115   42385 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:38 /usr/share/ca-certificates/99562.pem
	I1018 09:31:03.275185   42385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99562.pem
	I1018 09:31:03.282426   42385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99562.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:31:03.296206   42385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:31:03.310001   42385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:31:03.315429   42385 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:31:03.315509   42385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:31:03.323047   42385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:31:03.336823   42385 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:31:03.342476   42385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:31:03.350440   42385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:31:03.358455   42385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:31:03.366809   42385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:31:03.375100   42385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:31:03.382964   42385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
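Each existing control-plane certificate is checked with `openssl x509 -noout -checkend 86400`, i.e. that it remains valid for at least another 24 hours before being reused. The equivalent check written in Go (a sketch; the path mirrors one of the files from the log above):

```go
// certexpiry.go - verify a certificate remains valid for the next 24h,
// equivalent to `openssl x509 -noout -checkend 86400` used in the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid beyond 24h, expires:", cert.NotAfter)
	}
}
```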
	I1018 09:31:03.390778   42385 kubeadm.go:400] StartCluster: {Name:test-preload-279124 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-279124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:31:03.390855   42385 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:31:03.390934   42385 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:31:03.433091   42385 cri.go:89] found id: ""
	I1018 09:31:03.433164   42385 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:31:03.445742   42385 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:31:03.445771   42385 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:31:03.445821   42385 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:31:03.458146   42385 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:31:03.458544   42385 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-279124" does not appear in /home/jenkins/minikube-integration/21767-6063/kubeconfig
	I1018 09:31:03.458640   42385 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-6063/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-279124" cluster setting kubeconfig missing "test-preload-279124" context setting]
	I1018 09:31:03.458880   42385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/kubeconfig: {Name:mkb340db398364bcc27d468da7444ccfad7b82c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:03.459494   42385 kapi.go:59] client config for test-preload-279124: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/client.key", CAFile:"/home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:31:03.459912   42385 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 09:31:03.459950   42385 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 09:31:03.459955   42385 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 09:31:03.459960   42385 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 09:31:03.459965   42385 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 09:31:03.460264   42385 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:31:03.472415   42385 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.249
	I1018 09:31:03.472450   42385 kubeadm.go:1160] stopping kube-system containers ...
	I1018 09:31:03.472462   42385 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1018 09:31:03.472516   42385 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:31:03.513957   42385 cri.go:89] found id: ""
	I1018 09:31:03.514030   42385 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1018 09:31:03.538462   42385 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:31:03.550892   42385 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:31:03.550914   42385 kubeadm.go:157] found existing configuration files:
	
	I1018 09:31:03.550984   42385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:31:03.562453   42385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:31:03.562535   42385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:31:03.574992   42385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:31:03.586333   42385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:31:03.586409   42385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:31:03.598685   42385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:31:03.609625   42385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:31:03.609695   42385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:31:03.622087   42385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:31:03.633128   42385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:31:03.633186   42385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:31:03.645406   42385 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:31:03.657854   42385 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:31:03.715290   42385 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:31:04.590054   42385 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:31:04.826428   42385 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:31:04.893606   42385 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:31:04.980247   42385 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:31:04.980348   42385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:31:05.481216   42385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:31:05.981422   42385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:31:06.480944   42385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:31:06.981361   42385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:31:07.480714   42385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:31:07.509912   42385 api_server.go:72] duration metric: took 2.529677227s to wait for apiserver process to appear ...
	I1018 09:31:07.509977   42385 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:31:07.509999   42385 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1018 09:31:07.510542   42385 api_server.go:269] stopped: https://192.168.39.249:8443/healthz: Get "https://192.168.39.249:8443/healthz": dial tcp 192.168.39.249:8443: connect: connection refused
	I1018 09:31:08.010233   42385 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1018 09:31:09.941765   42385 api_server.go:279] https://192.168.39.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:31:09.941796   42385 api_server.go:103] status: https://192.168.39.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:31:09.941823   42385 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1018 09:31:10.034549   42385 api_server.go:279] https://192.168.39.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:31:10.034586   42385 api_server.go:103] status: https://192.168.39.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:31:10.034604   42385 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1018 09:31:10.068441   42385 api_server.go:279] https://192.168.39.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:31:10.068476   42385 api_server.go:103] status: https://192.168.39.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:31:10.510086   42385 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1018 09:31:10.514984   42385 api_server.go:279] https://192.168.39.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:31:10.515030   42385 api_server.go:103] status: https://192.168.39.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:31:11.010344   42385 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1018 09:31:11.016549   42385 api_server.go:279] https://192.168.39.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:31:11.016575   42385 api_server.go:103] status: https://192.168.39.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:31:11.510912   42385 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1018 09:31:11.522642   42385 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I1018 09:31:11.534348   42385 api_server.go:141] control plane version: v1.32.0
	I1018 09:31:11.534386   42385 api_server.go:131] duration metric: took 4.024400743s to wait for apiserver health ...
	I1018 09:31:11.534399   42385 cni.go:84] Creating CNI manager for ""
	I1018 09:31:11.534408   42385 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:31:11.535911   42385 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 09:31:11.537478   42385 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 09:31:11.557977   42385 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1018 09:31:11.584673   42385 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:31:11.593007   42385 system_pods.go:59] 7 kube-system pods found
	I1018 09:31:11.593050   42385 system_pods.go:61] "coredns-668d6bf9bc-rvb76" [0d2504fe-15e2-4cd0-992b-94a4e43c2c6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:31:11.593061   42385 system_pods.go:61] "etcd-test-preload-279124" [bea8a2d7-a9c2-4ecc-88ac-b8eeef1207ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:31:11.593072   42385 system_pods.go:61] "kube-apiserver-test-preload-279124" [70c895aa-61fb-4dd2-98ab-cfc9287ec9bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:31:11.593079   42385 system_pods.go:61] "kube-controller-manager-test-preload-279124" [90de3431-6dd4-47fa-8b9f-09f80d47128f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:31:11.593087   42385 system_pods.go:61] "kube-proxy-d5j2q" [cf473e5b-3c1a-4b5c-a975-93930c68c044] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 09:31:11.593094   42385 system_pods.go:61] "kube-scheduler-test-preload-279124" [834da6ca-8920-497f-852a-073342a02e37] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:31:11.593102   42385 system_pods.go:61] "storage-provisioner" [a4b49de5-8f79-4ad7-b2ea-53008b96c7e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:31:11.593110   42385 system_pods.go:74] duration metric: took 8.40543ms to wait for pod list to return data ...
	I1018 09:31:11.593121   42385 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:31:11.598350   42385 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 09:31:11.598390   42385 node_conditions.go:123] node cpu capacity is 2
	I1018 09:31:11.598414   42385 node_conditions.go:105] duration metric: took 5.287161ms to run NodePressure ...
	I1018 09:31:11.598481   42385 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:31:11.868988   42385 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1018 09:31:11.872702   42385 kubeadm.go:743] kubelet initialised
	I1018 09:31:11.872736   42385 kubeadm.go:744] duration metric: took 3.716449ms waiting for restarted kubelet to initialise ...
	I1018 09:31:11.872757   42385 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:31:11.887555   42385 ops.go:34] apiserver oom_adj: -16
	I1018 09:31:11.887584   42385 kubeadm.go:601] duration metric: took 8.441805007s to restartPrimaryControlPlane
	I1018 09:31:11.887598   42385 kubeadm.go:402] duration metric: took 8.496824122s to StartCluster
	I1018 09:31:11.887630   42385 settings.go:142] acquiring lock: {Name:mk5c51ba919dd454ddb697f518b92637a3560487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:11.887717   42385 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-6063/kubeconfig
	I1018 09:31:11.888372   42385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/kubeconfig: {Name:mkb340db398364bcc27d468da7444ccfad7b82c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:11.888617   42385 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:31:11.888703   42385 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:31:11.888782   42385 addons.go:69] Setting storage-provisioner=true in profile "test-preload-279124"
	I1018 09:31:11.888804   42385 addons.go:238] Setting addon storage-provisioner=true in "test-preload-279124"
	W1018 09:31:11.888814   42385 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:31:11.888831   42385 addons.go:69] Setting default-storageclass=true in profile "test-preload-279124"
	I1018 09:31:11.888852   42385 host.go:66] Checking if "test-preload-279124" exists ...
	I1018 09:31:11.888849   42385 config.go:182] Loaded profile config "test-preload-279124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1018 09:31:11.888857   42385 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-279124"
	I1018 09:31:11.889284   42385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:31:11.889315   42385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:31:11.889331   42385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:31:11.889372   42385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:31:11.890586   42385 out.go:179] * Verifying Kubernetes components...
	I1018 09:31:11.892204   42385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:31:11.903875   42385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36807
	I1018 09:31:11.903891   42385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35519
	I1018 09:31:11.904379   42385 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:31:11.904506   42385 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:31:11.904851   42385 main.go:141] libmachine: Using API Version  1
	I1018 09:31:11.904878   42385 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:31:11.905037   42385 main.go:141] libmachine: Using API Version  1
	I1018 09:31:11.905065   42385 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:31:11.905280   42385 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:31:11.905371   42385 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:31:11.905476   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetState
	I1018 09:31:11.905966   42385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:31:11.905997   42385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:31:11.907872   42385 kapi.go:59] client config for test-preload-279124: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/client.key", CAFile:"/home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:31:11.908256   42385 addons.go:238] Setting addon default-storageclass=true in "test-preload-279124"
	W1018 09:31:11.908277   42385 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:31:11.908314   42385 host.go:66] Checking if "test-preload-279124" exists ...
	I1018 09:31:11.908674   42385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:31:11.908708   42385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:31:11.920858   42385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42641
	I1018 09:31:11.921264   42385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42885
	I1018 09:31:11.921365   42385 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:31:11.921702   42385 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:31:11.921841   42385 main.go:141] libmachine: Using API Version  1
	I1018 09:31:11.921860   42385 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:31:11.922226   42385 main.go:141] libmachine: Using API Version  1
	I1018 09:31:11.922258   42385 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:31:11.922273   42385 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:31:11.922465   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetState
	I1018 09:31:11.922623   42385 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:31:11.923087   42385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:31:11.923116   42385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:31:11.924636   42385 main.go:141] libmachine: (test-preload-279124) Calling .DriverName
	I1018 09:31:11.929502   42385 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:31:11.931444   42385 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:31:11.931459   42385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:31:11.931476   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHHostname
	I1018 09:31:11.935124   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:31:11.935754   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:31:11.935806   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:31:11.935963   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHPort
	I1018 09:31:11.936165   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:31:11.936336   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHUsername
	I1018 09:31:11.936484   42385 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/test-preload-279124/id_rsa Username:docker}
	I1018 09:31:11.937590   42385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36425
	I1018 09:31:11.938044   42385 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:31:11.938397   42385 main.go:141] libmachine: Using API Version  1
	I1018 09:31:11.938419   42385 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:31:11.938835   42385 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:31:11.939033   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetState
	I1018 09:31:11.940961   42385 main.go:141] libmachine: (test-preload-279124) Calling .DriverName
	I1018 09:31:11.941156   42385 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:31:11.941172   42385 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:31:11.941188   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHHostname
	I1018 09:31:11.944290   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:31:11.944711   42385 main.go:141] libmachine: (test-preload-279124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:40", ip: ""} in network mk-test-preload-279124: {Iface:virbr1 ExpiryTime:2025-10-18 10:30:52 +0000 UTC Type:0 Mac:52:54:00:c9:c4:40 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:test-preload-279124 Clientid:01:52:54:00:c9:c4:40}
	I1018 09:31:11.944735   42385 main.go:141] libmachine: (test-preload-279124) DBG | domain test-preload-279124 has defined IP address 192.168.39.249 and MAC address 52:54:00:c9:c4:40 in network mk-test-preload-279124
	I1018 09:31:11.944929   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHPort
	I1018 09:31:11.945102   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHKeyPath
	I1018 09:31:11.945243   42385 main.go:141] libmachine: (test-preload-279124) Calling .GetSSHUsername
	I1018 09:31:11.945369   42385 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/test-preload-279124/id_rsa Username:docker}
	I1018 09:31:12.160593   42385 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:31:12.203546   42385 node_ready.go:35] waiting up to 6m0s for node "test-preload-279124" to be "Ready" ...
	I1018 09:31:12.287360   42385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:31:12.444138   42385 main.go:141] libmachine: Making call to close driver server
	I1018 09:31:12.444170   42385 main.go:141] libmachine: (test-preload-279124) Calling .Close
	I1018 09:31:12.444509   42385 main.go:141] libmachine: Successfully made call to close driver server
	I1018 09:31:12.444532   42385 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 09:31:12.444545   42385 main.go:141] libmachine: Making call to close driver server
	I1018 09:31:12.444556   42385 main.go:141] libmachine: (test-preload-279124) Calling .Close
	I1018 09:31:12.444784   42385 main.go:141] libmachine: Successfully made call to close driver server
	I1018 09:31:12.444806   42385 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 09:31:12.444832   42385 main.go:141] libmachine: (test-preload-279124) DBG | Closing plugin on server side
	I1018 09:31:12.453161   42385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:31:12.453296   42385 main.go:141] libmachine: Making call to close driver server
	I1018 09:31:12.453318   42385 main.go:141] libmachine: (test-preload-279124) Calling .Close
	I1018 09:31:12.453619   42385 main.go:141] libmachine: Successfully made call to close driver server
	I1018 09:31:12.453638   42385 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 09:31:13.135568   42385 main.go:141] libmachine: Making call to close driver server
	I1018 09:31:13.135592   42385 main.go:141] libmachine: (test-preload-279124) Calling .Close
	I1018 09:31:13.135910   42385 main.go:141] libmachine: Successfully made call to close driver server
	I1018 09:31:13.135936   42385 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 09:31:13.135946   42385 main.go:141] libmachine: Making call to close driver server
	I1018 09:31:13.135953   42385 main.go:141] libmachine: (test-preload-279124) Calling .Close
	I1018 09:31:13.136227   42385 main.go:141] libmachine: Successfully made call to close driver server
	I1018 09:31:13.136243   42385 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 09:31:13.136273   42385 main.go:141] libmachine: (test-preload-279124) DBG | Closing plugin on server side
	I1018 09:31:13.138296   42385 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1018 09:31:13.139778   42385 addons.go:514] duration metric: took 1.251075492s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1018 09:31:14.208232   42385 node_ready.go:57] node "test-preload-279124" has "Ready":"False" status (will retry)
	W1018 09:31:16.707535   42385 node_ready.go:57] node "test-preload-279124" has "Ready":"False" status (will retry)
	W1018 09:31:19.207311   42385 node_ready.go:57] node "test-preload-279124" has "Ready":"False" status (will retry)
	I1018 09:31:20.707235   42385 node_ready.go:49] node "test-preload-279124" is "Ready"
	I1018 09:31:20.707275   42385 node_ready.go:38] duration metric: took 8.503653116s for node "test-preload-279124" to be "Ready" ...
	I1018 09:31:20.707290   42385 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:31:20.707344   42385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:31:20.736852   42385 api_server.go:72] duration metric: took 8.848203064s to wait for apiserver process to appear ...
	I1018 09:31:20.736881   42385 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:31:20.736899   42385 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1018 09:31:20.747573   42385 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I1018 09:31:20.748615   42385 api_server.go:141] control plane version: v1.32.0
	I1018 09:31:20.748644   42385 api_server.go:131] duration metric: took 11.755531ms to wait for apiserver health ...
	I1018 09:31:20.748654   42385 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:31:20.752859   42385 system_pods.go:59] 7 kube-system pods found
	I1018 09:31:20.752892   42385 system_pods.go:61] "coredns-668d6bf9bc-rvb76" [0d2504fe-15e2-4cd0-992b-94a4e43c2c6e] Running
	I1018 09:31:20.752903   42385 system_pods.go:61] "etcd-test-preload-279124" [bea8a2d7-a9c2-4ecc-88ac-b8eeef1207ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:31:20.752913   42385 system_pods.go:61] "kube-apiserver-test-preload-279124" [70c895aa-61fb-4dd2-98ab-cfc9287ec9bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:31:20.752936   42385 system_pods.go:61] "kube-controller-manager-test-preload-279124" [90de3431-6dd4-47fa-8b9f-09f80d47128f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:31:20.752943   42385 system_pods.go:61] "kube-proxy-d5j2q" [cf473e5b-3c1a-4b5c-a975-93930c68c044] Running
	I1018 09:31:20.752960   42385 system_pods.go:61] "kube-scheduler-test-preload-279124" [834da6ca-8920-497f-852a-073342a02e37] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:31:20.752966   42385 system_pods.go:61] "storage-provisioner" [a4b49de5-8f79-4ad7-b2ea-53008b96c7e9] Running
	I1018 09:31:20.752976   42385 system_pods.go:74] duration metric: took 4.314841ms to wait for pod list to return data ...
	I1018 09:31:20.752987   42385 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:31:20.763527   42385 default_sa.go:45] found service account: "default"
	I1018 09:31:20.763559   42385 default_sa.go:55] duration metric: took 10.564383ms for default service account to be created ...
	I1018 09:31:20.763571   42385 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:31:20.766623   42385 system_pods.go:86] 7 kube-system pods found
	I1018 09:31:20.766658   42385 system_pods.go:89] "coredns-668d6bf9bc-rvb76" [0d2504fe-15e2-4cd0-992b-94a4e43c2c6e] Running
	I1018 09:31:20.766667   42385 system_pods.go:89] "etcd-test-preload-279124" [bea8a2d7-a9c2-4ecc-88ac-b8eeef1207ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:31:20.766688   42385 system_pods.go:89] "kube-apiserver-test-preload-279124" [70c895aa-61fb-4dd2-98ab-cfc9287ec9bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:31:20.766699   42385 system_pods.go:89] "kube-controller-manager-test-preload-279124" [90de3431-6dd4-47fa-8b9f-09f80d47128f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:31:20.766703   42385 system_pods.go:89] "kube-proxy-d5j2q" [cf473e5b-3c1a-4b5c-a975-93930c68c044] Running
	I1018 09:31:20.766708   42385 system_pods.go:89] "kube-scheduler-test-preload-279124" [834da6ca-8920-497f-852a-073342a02e37] Running
	I1018 09:31:20.766712   42385 system_pods.go:89] "storage-provisioner" [a4b49de5-8f79-4ad7-b2ea-53008b96c7e9] Running
	I1018 09:31:20.766720   42385 system_pods.go:126] duration metric: took 3.142994ms to wait for k8s-apps to be running ...
	I1018 09:31:20.766732   42385 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:31:20.766777   42385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:31:20.784678   42385 system_svc.go:56] duration metric: took 17.936569ms WaitForService to wait for kubelet
	I1018 09:31:20.784712   42385 kubeadm.go:586] duration metric: took 8.896068689s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:31:20.784729   42385 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:31:20.788334   42385 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 09:31:20.788361   42385 node_conditions.go:123] node cpu capacity is 2
	I1018 09:31:20.788374   42385 node_conditions.go:105] duration metric: took 3.641ms to run NodePressure ...
	I1018 09:31:20.788385   42385 start.go:241] waiting for startup goroutines ...
	I1018 09:31:20.788392   42385 start.go:246] waiting for cluster config update ...
	I1018 09:31:20.788402   42385 start.go:255] writing updated cluster config ...
	I1018 09:31:20.788728   42385 ssh_runner.go:195] Run: rm -f paused
	I1018 09:31:20.794073   42385 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:31:20.794595   42385 kapi.go:59] client config for test-preload-279124: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/client.key", CAFile:"/home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:31:20.802948   42385 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-rvb76" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:31:20.808463   42385 pod_ready.go:94] pod "coredns-668d6bf9bc-rvb76" is "Ready"
	I1018 09:31:20.808494   42385 pod_ready.go:86] duration metric: took 5.511362ms for pod "coredns-668d6bf9bc-rvb76" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:31:20.810978   42385 pod_ready.go:83] waiting for pod "etcd-test-preload-279124" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:31:22.817045   42385 pod_ready.go:104] pod "etcd-test-preload-279124" is not "Ready", error: <nil>
	I1018 09:31:24.817493   42385 pod_ready.go:94] pod "etcd-test-preload-279124" is "Ready"
	I1018 09:31:24.817532   42385 pod_ready.go:86] duration metric: took 4.006527302s for pod "etcd-test-preload-279124" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:31:24.819946   42385 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-279124" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:31:26.325554   42385 pod_ready.go:94] pod "kube-apiserver-test-preload-279124" is "Ready"
	I1018 09:31:26.325584   42385 pod_ready.go:86] duration metric: took 1.505606375s for pod "kube-apiserver-test-preload-279124" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:31:26.327531   42385 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-279124" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:31:26.331667   42385 pod_ready.go:94] pod "kube-controller-manager-test-preload-279124" is "Ready"
	I1018 09:31:26.331696   42385 pod_ready.go:86] duration metric: took 4.140047ms for pod "kube-controller-manager-test-preload-279124" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:31:26.334302   42385 pod_ready.go:83] waiting for pod "kube-proxy-d5j2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:31:26.339308   42385 pod_ready.go:94] pod "kube-proxy-d5j2q" is "Ready"
	I1018 09:31:26.339332   42385 pod_ready.go:86] duration metric: took 5.001986ms for pod "kube-proxy-d5j2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:31:26.414843   42385 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-279124" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:31:26.814187   42385 pod_ready.go:94] pod "kube-scheduler-test-preload-279124" is "Ready"
	I1018 09:31:26.814233   42385 pod_ready.go:86] duration metric: took 399.36186ms for pod "kube-scheduler-test-preload-279124" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:31:26.814246   42385 pod_ready.go:40] duration metric: took 6.020128766s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:31:26.857687   42385 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1018 09:31:26.859665   42385 out.go:203] 
	W1018 09:31:26.861312   42385 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1018 09:31:26.862912   42385 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1018 09:31:26.864744   42385 out.go:179] * Done! kubectl is now configured to use "test-preload-279124" cluster and "default" namespace by default
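	[Editor's note, not part of the captured log] The restart above checks for existing kubeconfigs (none are present), reruns the kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd), and then polls the apiserver: first for the kube-apiserver process (pgrep, ~2.5s), then for /healthz, which moves from connection refused, to 403 for the anonymous probe, to 500 while the bootstrap post-start hooks finish, and finally to 200 after ~4s. The "minor skew: 2" warning at the end is simply kubectl 1.34 against a 1.32 cluster. Below is a minimal Go sketch of that health probe, offered only as an illustration: the endpoint and the client cert/key/CA paths are copied from the kapi.go client config in this log and would be different on any other host.

	// healthz_probe.go: illustrative sketch of the /healthz polling seen in api_server.go above.
	// Paths and endpoint are taken from this log; treat them as placeholders elsewhere.
	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		certFile := "/home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/client.crt"
		keyFile := "/home/jenkins/minikube-integration/21767-6063/.minikube/profiles/test-preload-279124/client.key"
		caFile := "/home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt"

		// Anonymous requests get 403 (see the log), so present the profile's client certificate.
		cert, err := tls.LoadX509KeyPair(certFile, keyFile)
		if err != nil {
			panic(err)
		}
		caPEM, err := os.ReadFile(caFile)
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)

		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
			},
		}

		// Poll every 500ms, roughly as the log does, until /healthz returns 200 "ok".
		for {
			resp, err := client.Get("https://192.168.39.249:8443/healthz")
			if err != nil {
				fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("returned %d: %s\n", resp.StatusCode, string(body))
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}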
	
	
	==> CRI-O <==
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.789331905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760779887789308432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b11a60ac-588f-495f-8351-034821b8a6e4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.789923712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9d6a8be-fc2f-469e-846c-718c1cb90933 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.789973021Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9d6a8be-fc2f-469e-846c-718c1cb90933 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.790152846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d5546db0fb149112f1755284513bf384fe9020398990dc752a08d67d05b08e2,PodSandboxId:5b75ee6ee23d4226ec288933093b456e0f53a95a4c2d7159bf5676c39c93b8d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760779879009038139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rvb76,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d2504fe-15e2-4cd0-992b-94a4e43c2c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02868596cc7952365bf7625067f265a0aeefd22f11b67e2a058a83e559107bf4,PodSandboxId:8824fdeaf40c18ca1d32be1a02a26e76d7f23fdcc31cbf5a54dcbd90480b1750,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760779871410009310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5j2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cf473e5b-3c1a-4b5c-a975-93930c68c044,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4589afdbed40f9e20f0a72eb61778018fbcc7ff9f1671d2c698df749796eb443,PodSandboxId:ac7b7ff1cabeb223daa387c6e785b4571056a5b3d931043f20dff753fa9bd2e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760779871387099716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4
b49de5-8f79-4ad7-b2ea-53008b96c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58c79893cc77f62e8806834bdcd2bb2938f63beb0423fbf847fc67754020b0d,PodSandboxId:e6a81a2bd168c72a55381826fd282764207f1b4c72dc4d87528cb53982d4a83b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760779867173937459,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 790a502c5
e510296d1cedd1d076a0736,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef36a8d6ac39ec070faa55933b02adb518125bc0132898246e472450530ff65c,PodSandboxId:89a268ffbd60e43e7dccc4ef29fdaa1f392068ca9dc9b3ce258c64854b651f1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760779867148508690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: fcd326d30907fd751fc3d82c4796aec1,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5f51f69f225c903623754410af0f26cf12ad00ca9e75dc4a9941f41260b8c3,PodSandboxId:de58100215a33e4020078217f76f2c6656a0d7d37ed1af28c71134ec4a665d61,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760779867115547382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7b1a24522b779b2cc44fab08771609,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:776114c7c8775f1a34a36dfd6b87a9f2ea688156b657262f83f7b4e2b1b0d64a,PodSandboxId:c4af6eeac7bb39e0672dd16a387b5d6d4b7614032ed62f6643a65289c9f0be9a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760779867118808092,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c7fd623e1ac2e66201b4e9740fa862a,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9d6a8be-fc2f-469e-846c-718c1cb90933 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.830728152Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=03650ca8-2b39-4036-97bc-27d8bba1d192 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.830811430Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=03650ca8-2b39-4036-97bc-27d8bba1d192 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.832399628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59ba6a35-e16d-4b6f-aca7-075709aab9c6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.832886390Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760779887832863334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59ba6a35-e16d-4b6f-aca7-075709aab9c6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.833507568Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=683a1e40-df9a-4ea7-8847-c1cc846029fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.833613198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=683a1e40-df9a-4ea7-8847-c1cc846029fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.834793095Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d5546db0fb149112f1755284513bf384fe9020398990dc752a08d67d05b08e2,PodSandboxId:5b75ee6ee23d4226ec288933093b456e0f53a95a4c2d7159bf5676c39c93b8d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760779879009038139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rvb76,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d2504fe-15e2-4cd0-992b-94a4e43c2c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02868596cc7952365bf7625067f265a0aeefd22f11b67e2a058a83e559107bf4,PodSandboxId:8824fdeaf40c18ca1d32be1a02a26e76d7f23fdcc31cbf5a54dcbd90480b1750,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760779871410009310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5j2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cf473e5b-3c1a-4b5c-a975-93930c68c044,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4589afdbed40f9e20f0a72eb61778018fbcc7ff9f1671d2c698df749796eb443,PodSandboxId:ac7b7ff1cabeb223daa387c6e785b4571056a5b3d931043f20dff753fa9bd2e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760779871387099716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4
b49de5-8f79-4ad7-b2ea-53008b96c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58c79893cc77f62e8806834bdcd2bb2938f63beb0423fbf847fc67754020b0d,PodSandboxId:e6a81a2bd168c72a55381826fd282764207f1b4c72dc4d87528cb53982d4a83b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760779867173937459,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 790a502c5
e510296d1cedd1d076a0736,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef36a8d6ac39ec070faa55933b02adb518125bc0132898246e472450530ff65c,PodSandboxId:89a268ffbd60e43e7dccc4ef29fdaa1f392068ca9dc9b3ce258c64854b651f1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760779867148508690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: fcd326d30907fd751fc3d82c4796aec1,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5f51f69f225c903623754410af0f26cf12ad00ca9e75dc4a9941f41260b8c3,PodSandboxId:de58100215a33e4020078217f76f2c6656a0d7d37ed1af28c71134ec4a665d61,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760779867115547382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7b1a24522b779b2cc44fab08771609,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:776114c7c8775f1a34a36dfd6b87a9f2ea688156b657262f83f7b4e2b1b0d64a,PodSandboxId:c4af6eeac7bb39e0672dd16a387b5d6d4b7614032ed62f6643a65289c9f0be9a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760779867118808092,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c7fd623e1ac2e66201b4e9740fa862a,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=683a1e40-df9a-4ea7-8847-c1cc846029fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.881155520Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab67f80d-b21e-4739-8ff8-51fe9e56af49 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.881237695Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab67f80d-b21e-4739-8ff8-51fe9e56af49 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.882819597Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=129c416d-0387-4528-b01b-74ef472445ec name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.883241768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760779887883217419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=129c416d-0387-4528-b01b-74ef472445ec name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.883973850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c22bd2b4-0374-4c35-b6e9-2cf42a2174ab name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.884081137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c22bd2b4-0374-4c35-b6e9-2cf42a2174ab name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.884263009Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d5546db0fb149112f1755284513bf384fe9020398990dc752a08d67d05b08e2,PodSandboxId:5b75ee6ee23d4226ec288933093b456e0f53a95a4c2d7159bf5676c39c93b8d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760779879009038139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rvb76,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d2504fe-15e2-4cd0-992b-94a4e43c2c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02868596cc7952365bf7625067f265a0aeefd22f11b67e2a058a83e559107bf4,PodSandboxId:8824fdeaf40c18ca1d32be1a02a26e76d7f23fdcc31cbf5a54dcbd90480b1750,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760779871410009310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5j2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cf473e5b-3c1a-4b5c-a975-93930c68c044,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4589afdbed40f9e20f0a72eb61778018fbcc7ff9f1671d2c698df749796eb443,PodSandboxId:ac7b7ff1cabeb223daa387c6e785b4571056a5b3d931043f20dff753fa9bd2e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760779871387099716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4
b49de5-8f79-4ad7-b2ea-53008b96c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58c79893cc77f62e8806834bdcd2bb2938f63beb0423fbf847fc67754020b0d,PodSandboxId:e6a81a2bd168c72a55381826fd282764207f1b4c72dc4d87528cb53982d4a83b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760779867173937459,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 790a502c5
e510296d1cedd1d076a0736,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef36a8d6ac39ec070faa55933b02adb518125bc0132898246e472450530ff65c,PodSandboxId:89a268ffbd60e43e7dccc4ef29fdaa1f392068ca9dc9b3ce258c64854b651f1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760779867148508690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: fcd326d30907fd751fc3d82c4796aec1,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5f51f69f225c903623754410af0f26cf12ad00ca9e75dc4a9941f41260b8c3,PodSandboxId:de58100215a33e4020078217f76f2c6656a0d7d37ed1af28c71134ec4a665d61,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760779867115547382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7b1a24522b779b2cc44fab08771609,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:776114c7c8775f1a34a36dfd6b87a9f2ea688156b657262f83f7b4e2b1b0d64a,PodSandboxId:c4af6eeac7bb39e0672dd16a387b5d6d4b7614032ed62f6643a65289c9f0be9a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760779867118808092,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c7fd623e1ac2e66201b4e9740fa862a,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c22bd2b4-0374-4c35-b6e9-2cf42a2174ab name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.921831535Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f745e84a-3ef9-46cb-b7d9-f4042b79fbfb name=/runtime.v1.RuntimeService/Version
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.921906015Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f745e84a-3ef9-46cb-b7d9-f4042b79fbfb name=/runtime.v1.RuntimeService/Version
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.923027095Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2b9f764-5966-4ac5-a29e-3550ecdf9d2a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.923581598Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760779887923554491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2b9f764-5966-4ac5-a29e-3550ecdf9d2a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.924296406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd1355ec-9776-438e-b1ef-2bd9425a8db8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.924616812Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd1355ec-9776-438e-b1ef-2bd9425a8db8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:31:27 test-preload-279124 crio[824]: time="2025-10-18 09:31:27.924937447Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d5546db0fb149112f1755284513bf384fe9020398990dc752a08d67d05b08e2,PodSandboxId:5b75ee6ee23d4226ec288933093b456e0f53a95a4c2d7159bf5676c39c93b8d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760779879009038139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rvb76,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d2504fe-15e2-4cd0-992b-94a4e43c2c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02868596cc7952365bf7625067f265a0aeefd22f11b67e2a058a83e559107bf4,PodSandboxId:8824fdeaf40c18ca1d32be1a02a26e76d7f23fdcc31cbf5a54dcbd90480b1750,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760779871410009310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5j2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cf473e5b-3c1a-4b5c-a975-93930c68c044,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4589afdbed40f9e20f0a72eb61778018fbcc7ff9f1671d2c698df749796eb443,PodSandboxId:ac7b7ff1cabeb223daa387c6e785b4571056a5b3d931043f20dff753fa9bd2e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760779871387099716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4
b49de5-8f79-4ad7-b2ea-53008b96c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58c79893cc77f62e8806834bdcd2bb2938f63beb0423fbf847fc67754020b0d,PodSandboxId:e6a81a2bd168c72a55381826fd282764207f1b4c72dc4d87528cb53982d4a83b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760779867173937459,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 790a502c5
e510296d1cedd1d076a0736,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef36a8d6ac39ec070faa55933b02adb518125bc0132898246e472450530ff65c,PodSandboxId:89a268ffbd60e43e7dccc4ef29fdaa1f392068ca9dc9b3ce258c64854b651f1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760779867148508690,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: fcd326d30907fd751fc3d82c4796aec1,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5f51f69f225c903623754410af0f26cf12ad00ca9e75dc4a9941f41260b8c3,PodSandboxId:de58100215a33e4020078217f76f2c6656a0d7d37ed1af28c71134ec4a665d61,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760779867115547382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd7b1a24522b779b2cc44fab08771609,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:776114c7c8775f1a34a36dfd6b87a9f2ea688156b657262f83f7b4e2b1b0d64a,PodSandboxId:c4af6eeac7bb39e0672dd16a387b5d6d4b7614032ed62f6643a65289c9f0be9a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760779867118808092,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-279124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c7fd623e1ac2e66201b4e9740fa862a,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd1355ec-9776-438e-b1ef-2bd9425a8db8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5d5546db0fb14       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   8 seconds ago       Running             coredns                   1                   5b75ee6ee23d4       coredns-668d6bf9bc-rvb76
	02868596cc795       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   16 seconds ago      Running             kube-proxy                1                   8824fdeaf40c1       kube-proxy-d5j2q
	4589afdbed40f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       2                   ac7b7ff1cabeb       storage-provisioner
	d58c79893cc77       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   20 seconds ago      Running             kube-scheduler            1                   e6a81a2bd168c       kube-scheduler-test-preload-279124
	ef36a8d6ac39e       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   89a268ffbd60e       kube-controller-manager-test-preload-279124
	776114c7c8775       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 seconds ago      Running             kube-apiserver            1                   c4af6eeac7bb3       kube-apiserver-test-preload-279124
	9c5f51f69f225       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   20 seconds ago      Running             etcd                      1                   de58100215a33       etcd-test-preload-279124
	
	
	==> coredns [5d5546db0fb149112f1755284513bf384fe9020398990dc752a08d67d05b08e2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40440 - 41677 "HINFO IN 5343626314589026208.7368612891556455570. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.081067249s
	
	
	==> describe nodes <==
	Name:               test-preload-279124
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-279124
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=test-preload-279124
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_29_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:29:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-279124
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:31:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:31:20 +0000   Sat, 18 Oct 2025 09:29:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:31:20 +0000   Sat, 18 Oct 2025 09:29:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:31:20 +0000   Sat, 18 Oct 2025 09:29:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:31:20 +0000   Sat, 18 Oct 2025 09:31:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    test-preload-279124
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 a66a8098790f4c97b2267e19429e092f
	  System UUID:                a66a8098-790f-4c97-b226-7e19429e092f
	  Boot ID:                    f9d8e4a5-a818-43a0-a96c-496cfbdfb07d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-rvb76                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     106s
	  kube-system                 etcd-test-preload-279124                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         111s
	  kube-system                 kube-apiserver-test-preload-279124             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-test-preload-279124    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-d5j2q                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-test-preload-279124             100m (5%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 103s               kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   NodeHasSufficientMemory  111s               kubelet          Node test-preload-279124 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  111s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    111s               kubelet          Node test-preload-279124 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     111s               kubelet          Node test-preload-279124 status is now: NodeHasSufficientPID
	  Normal   Starting                 111s               kubelet          Starting kubelet.
	  Normal   NodeReady                110s               kubelet          Node test-preload-279124 status is now: NodeReady
	  Normal   RegisteredNode           107s               node-controller  Node test-preload-279124 event: Registered Node test-preload-279124 in Controller
	  Normal   Starting                 24s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-279124 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-279124 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-279124 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                kubelet          Node test-preload-279124 has been rebooted, boot id: f9d8e4a5-a818-43a0-a96c-496cfbdfb07d
	  Normal   RegisteredNode           15s                node-controller  Node test-preload-279124 event: Registered Node test-preload-279124 in Controller
	
	
	==> dmesg <==
	[Oct18 09:30] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000059] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002712] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.985128] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.091078] kauditd_printk_skb: 4 callbacks suppressed
	[Oct18 09:31] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.508915] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.000036] kauditd_printk_skb: 128 callbacks suppressed
	[  +5.047328] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [9c5f51f69f225c903623754410af0f26cf12ad00ca9e75dc4a9941f41260b8c3] <==
	{"level":"info","ts":"2025-10-18T09:31:07.583977Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T09:31:07.586651Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2025-10-18T09:31:07.587000Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2025-10-18T09:31:07.582379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 switched to configuration voters=(3571047793177318727)"}
	{"level":"info","ts":"2025-10-18T09:31:07.587108Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ba21282e7acd13d6","local-member-id":"318ee90c3446d547","added-peer-id":"318ee90c3446d547","added-peer-peer-urls":["https://192.168.39.249:2380"]}
	{"level":"info","ts":"2025-10-18T09:31:07.587238Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ba21282e7acd13d6","local-member-id":"318ee90c3446d547","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:31:07.587276Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:31:07.590488Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"318ee90c3446d547","initial-advertise-peer-urls":["https://192.168.39.249:2380"],"listen-peer-urls":["https://192.168.39.249:2380"],"advertise-client-urls":["https://192.168.39.249:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.249:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T09:31:07.598623Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T09:31:08.854085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T09:31:08.854126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T09:31:08.854140Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 2"}
	{"level":"info","ts":"2025-10-18T09:31:08.854151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T09:31:08.854157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgVoteResp from 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2025-10-18T09:31:08.854166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became leader at term 3"}
	{"level":"info","ts":"2025-10-18T09:31:08.854172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 318ee90c3446d547 elected leader 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2025-10-18T09:31:08.856796Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"318ee90c3446d547","local-member-attributes":"{Name:test-preload-279124 ClientURLs:[https://192.168.39.249:2379]}","request-path":"/0/members/318ee90c3446d547/attributes","cluster-id":"ba21282e7acd13d6","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T09:31:08.856995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:31:08.857085Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T09:31:08.857109Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T09:31:08.857017Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:31:08.857861Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-18T09:31:08.857866Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-18T09:31:08.858373Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T09:31:08.858887Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.249:2379"}
	
	
	==> kernel <==
	 09:31:28 up 0 min,  0 users,  load average: 0.76, 0.22, 0.07
	Linux test-preload-279124 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [776114c7c8775f1a34a36dfd6b87a9f2ea688156b657262f83f7b4e2b1b0d64a] <==
	I1018 09:31:10.099606       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:31:10.099684       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:31:10.100077       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1018 09:31:10.100121       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:31:10.100285       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 09:31:10.105606       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 09:31:10.107027       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:31:10.112151       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1018 09:31:10.133149       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1018 09:31:10.133211       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:31:10.133218       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:31:10.133224       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:31:10.133228       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:31:10.143689       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1018 09:31:10.143717       1 policy_source.go:240] refreshing policies
	I1018 09:31:10.154404       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:31:10.909362       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:31:11.046189       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1018 09:31:11.723321       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1018 09:31:11.764623       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1018 09:31:11.804891       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:31:11.816892       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:31:13.616933       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1018 09:31:13.665911       1 controller.go:615] quota admission added evaluator for: endpoints
	I1018 09:31:13.716765       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [ef36a8d6ac39ec070faa55933b02adb518125bc0132898246e472450530ff65c] <==
	I1018 09:31:13.254244       1 shared_informer.go:320] Caches are synced for attach detach
	I1018 09:31:13.259892       1 shared_informer.go:320] Caches are synced for PV protection
	I1018 09:31:13.259940       1 shared_informer.go:320] Caches are synced for crt configmap
	I1018 09:31:13.263611       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1018 09:31:13.263629       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1018 09:31:13.263853       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1018 09:31:13.264314       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1018 09:31:13.264378       1 shared_informer.go:320] Caches are synced for disruption
	I1018 09:31:13.264386       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1018 09:31:13.264393       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1018 09:31:13.267237       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1018 09:31:13.268383       1 shared_informer.go:320] Caches are synced for endpoint
	I1018 09:31:13.274701       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1018 09:31:13.274922       1 shared_informer.go:320] Caches are synced for garbage collector
	I1018 09:31:13.287217       1 shared_informer.go:320] Caches are synced for garbage collector
	I1018 09:31:13.287536       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:31:13.287549       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:31:13.625175       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="361.283456ms"
	I1018 09:31:13.625839       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.54µs"
	I1018 09:31:19.128817       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="105.033µs"
	I1018 09:31:20.148638       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="22.534485ms"
	I1018 09:31:20.148741       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="48.65µs"
	I1018 09:31:20.301075       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-279124"
	I1018 09:31:20.314910       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-279124"
	I1018 09:31:23.217024       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [02868596cc7952365bf7625067f265a0aeefd22f11b67e2a058a83e559107bf4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1018 09:31:11.690883       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1018 09:31:11.709832       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E1018 09:31:11.710036       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:31:11.769232       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1018 09:31:11.769316       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 09:31:11.769349       1 server_linux.go:170] "Using iptables Proxier"
	I1018 09:31:11.778088       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:31:11.778376       1 server.go:497] "Version info" version="v1.32.0"
	I1018 09:31:11.778387       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:31:11.779930       1 config.go:199] "Starting service config controller"
	I1018 09:31:11.779969       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1018 09:31:11.779994       1 config.go:105] "Starting endpoint slice config controller"
	I1018 09:31:11.779998       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1018 09:31:11.780932       1 config.go:329] "Starting node config controller"
	I1018 09:31:11.780953       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1018 09:31:11.880540       1 shared_informer.go:320] Caches are synced for service config
	I1018 09:31:11.880697       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1018 09:31:11.881115       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d58c79893cc77f62e8806834bdcd2bb2938f63beb0423fbf847fc67754020b0d] <==
	W1018 09:31:10.089221       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1018 09:31:10.089232       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1018 09:31:10.089272       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1018 09:31:10.089282       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1018 09:31:10.091319       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1018 09:31:10.091367       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1018 09:31:10.091649       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1018 09:31:10.091684       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1018 09:31:10.091729       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1018 09:31:10.091757       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1018 09:31:10.091795       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1018 09:31:10.091821       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1018 09:31:10.091859       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E1018 09:31:10.091870       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1018 09:31:10.091912       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1018 09:31:10.091936       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1018 09:31:10.091978       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1018 09:31:10.091996       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1018 09:31:10.092048       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1018 09:31:10.092144       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1018 09:31:10.092199       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1018 09:31:10.092225       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1018 09:31:10.092048       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1018 09:31:10.094261       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1018 09:31:10.180141       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 09:31:10 test-preload-279124 kubelet[1147]: E1018 09:31:10.235538    1147 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-279124\" already exists" pod="kube-system/kube-controller-manager-test-preload-279124"
	Oct 18 09:31:10 test-preload-279124 kubelet[1147]: I1018 09:31:10.235575    1147 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-279124"
	Oct 18 09:31:10 test-preload-279124 kubelet[1147]: E1018 09:31:10.243074    1147 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-279124\" already exists" pod="kube-system/kube-scheduler-test-preload-279124"
	Oct 18 09:31:10 test-preload-279124 kubelet[1147]: I1018 09:31:10.706332    1147 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-279124"
	Oct 18 09:31:10 test-preload-279124 kubelet[1147]: E1018 09:31:10.715734    1147 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-279124\" already exists" pod="kube-system/kube-scheduler-test-preload-279124"
	Oct 18 09:31:10 test-preload-279124 kubelet[1147]: I1018 09:31:10.931783    1147 apiserver.go:52] "Watching apiserver"
	Oct 18 09:31:10 test-preload-279124 kubelet[1147]: E1018 09:31:10.936824    1147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-rvb76" podUID="0d2504fe-15e2-4cd0-992b-94a4e43c2c6e"
	Oct 18 09:31:10 test-preload-279124 kubelet[1147]: I1018 09:31:10.956497    1147 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 18 09:31:11 test-preload-279124 kubelet[1147]: I1018 09:31:11.035560    1147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a4b49de5-8f79-4ad7-b2ea-53008b96c7e9-tmp\") pod \"storage-provisioner\" (UID: \"a4b49de5-8f79-4ad7-b2ea-53008b96c7e9\") " pod="kube-system/storage-provisioner"
	Oct 18 09:31:11 test-preload-279124 kubelet[1147]: I1018 09:31:11.035612    1147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf473e5b-3c1a-4b5c-a975-93930c68c044-lib-modules\") pod \"kube-proxy-d5j2q\" (UID: \"cf473e5b-3c1a-4b5c-a975-93930c68c044\") " pod="kube-system/kube-proxy-d5j2q"
	Oct 18 09:31:11 test-preload-279124 kubelet[1147]: I1018 09:31:11.035654    1147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf473e5b-3c1a-4b5c-a975-93930c68c044-xtables-lock\") pod \"kube-proxy-d5j2q\" (UID: \"cf473e5b-3c1a-4b5c-a975-93930c68c044\") " pod="kube-system/kube-proxy-d5j2q"
	Oct 18 09:31:11 test-preload-279124 kubelet[1147]: E1018 09:31:11.036172    1147 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 18 09:31:11 test-preload-279124 kubelet[1147]: E1018 09:31:11.036248    1147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d2504fe-15e2-4cd0-992b-94a4e43c2c6e-config-volume podName:0d2504fe-15e2-4cd0-992b-94a4e43c2c6e nodeName:}" failed. No retries permitted until 2025-10-18 09:31:11.536224763 +0000 UTC m=+6.712665459 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0d2504fe-15e2-4cd0-992b-94a4e43c2c6e-config-volume") pod "coredns-668d6bf9bc-rvb76" (UID: "0d2504fe-15e2-4cd0-992b-94a4e43c2c6e") : object "kube-system"/"coredns" not registered
	Oct 18 09:31:11 test-preload-279124 kubelet[1147]: E1018 09:31:11.539142    1147 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 18 09:31:11 test-preload-279124 kubelet[1147]: E1018 09:31:11.539219    1147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d2504fe-15e2-4cd0-992b-94a4e43c2c6e-config-volume podName:0d2504fe-15e2-4cd0-992b-94a4e43c2c6e nodeName:}" failed. No retries permitted until 2025-10-18 09:31:12.539205082 +0000 UTC m=+7.715645793 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0d2504fe-15e2-4cd0-992b-94a4e43c2c6e-config-volume") pod "coredns-668d6bf9bc-rvb76" (UID: "0d2504fe-15e2-4cd0-992b-94a4e43c2c6e") : object "kube-system"/"coredns" not registered
	Oct 18 09:31:12 test-preload-279124 kubelet[1147]: E1018 09:31:12.548978    1147 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 18 09:31:12 test-preload-279124 kubelet[1147]: E1018 09:31:12.549179    1147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d2504fe-15e2-4cd0-992b-94a4e43c2c6e-config-volume podName:0d2504fe-15e2-4cd0-992b-94a4e43c2c6e nodeName:}" failed. No retries permitted until 2025-10-18 09:31:14.549163873 +0000 UTC m=+9.725604572 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0d2504fe-15e2-4cd0-992b-94a4e43c2c6e-config-volume") pod "coredns-668d6bf9bc-rvb76" (UID: "0d2504fe-15e2-4cd0-992b-94a4e43c2c6e") : object "kube-system"/"coredns" not registered
	Oct 18 09:31:12 test-preload-279124 kubelet[1147]: E1018 09:31:12.977620    1147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-rvb76" podUID="0d2504fe-15e2-4cd0-992b-94a4e43c2c6e"
	Oct 18 09:31:14 test-preload-279124 kubelet[1147]: E1018 09:31:14.568496    1147 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 18 09:31:14 test-preload-279124 kubelet[1147]: E1018 09:31:14.568615    1147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d2504fe-15e2-4cd0-992b-94a4e43c2c6e-config-volume podName:0d2504fe-15e2-4cd0-992b-94a4e43c2c6e nodeName:}" failed. No retries permitted until 2025-10-18 09:31:18.568569075 +0000 UTC m=+13.745009772 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0d2504fe-15e2-4cd0-992b-94a4e43c2c6e-config-volume") pod "coredns-668d6bf9bc-rvb76" (UID: "0d2504fe-15e2-4cd0-992b-94a4e43c2c6e") : object "kube-system"/"coredns" not registered
	Oct 18 09:31:14 test-preload-279124 kubelet[1147]: E1018 09:31:14.985139    1147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-rvb76" podUID="0d2504fe-15e2-4cd0-992b-94a4e43c2c6e"
	Oct 18 09:31:15 test-preload-279124 kubelet[1147]: E1018 09:31:15.026535    1147 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760779875025592235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 18 09:31:15 test-preload-279124 kubelet[1147]: E1018 09:31:15.026558    1147 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760779875025592235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 18 09:31:25 test-preload-279124 kubelet[1147]: E1018 09:31:25.028277    1147 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760779885027527635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 18 09:31:25 test-preload-279124 kubelet[1147]: E1018 09:31:25.028612    1147 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760779885027527635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [4589afdbed40f9e20f0a72eb61778018fbcc7ff9f1671d2c698df749796eb443] <==
	I1018 09:31:11.565555       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-279124 -n test-preload-279124
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-279124 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-279124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-279124
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-279124: (1.027383381s)
--- FAIL: TestPreload (165.07s)
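The kubelet excerpt above is dominated by transient second-start noise: coredns cannot come up while "No CNI configuration file in /etc/cni/net.d/" is reported and the "kube-system"/"coredns" ConfigMap is not yet registered, alongside benign mirror-pod and eviction-manager messages. Whether any of this is the actual TestPreload failure cause is not established by this excerpt alone. As a minimal diagnostic sketch for a future repro run (the profile above is already deleted during cleanup, so these commands are illustrative only and assume the same minikube/kubectl invocation style used elsewhere in this report):

	# inspect the CNI config and kube-system pod state on the preload node
	out/minikube-linux-amd64 -p test-preload-279124 ssh "ls /etc/cni/net.d/ && sudo crictl pods"
	kubectl --context test-preload-279124 -n kube-system get pods -o wide

If a bridge CNI config is present and coredns reaches Running shortly after restart, the errors above were only startup noise.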

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (71.75s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-251981 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-251981 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m6.460922641s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-251981] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-251981" primary control-plane node in "pause-251981" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-251981" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
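The stdout above never includes the expected line; the second start instead reports "Using the kvm2 driver based on existing profile" and re-provisions the VM (see the stderr that follows). As a rough stand-in for the assertion at pause_test.go:100 (an assumption about its shape, not its actual code), the same check can be repeated by hand against the second start's combined output:

	# repeat the second start and grep for the expected no-reconfiguration message
	out/minikube-linux-amd64 start -p pause-251981 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio --auto-update-drivers=false 2>&1 | grep -F "The running cluster does not require reconfiguration"

A non-zero grep exit mirrors the failure recorded here.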
** stderr ** 
	I1018 09:37:38.773430   50299 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:37:38.773591   50299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:37:38.773600   50299 out.go:374] Setting ErrFile to fd 2...
	I1018 09:37:38.773607   50299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:37:38.773953   50299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
	I1018 09:37:38.774601   50299 out.go:368] Setting JSON to false
	I1018 09:37:38.775954   50299 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4809,"bootTime":1760775450,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:37:38.776089   50299 start.go:141] virtualization: kvm guest
	I1018 09:37:38.875008   50299 out.go:179] * [pause-251981] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:37:39.024068   50299 notify.go:220] Checking for updates...
	I1018 09:37:39.144294   50299 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:37:39.282465   50299 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:37:39.544719   50299 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	I1018 09:37:39.566414   50299 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	I1018 09:37:39.861093   50299 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:37:40.016722   50299 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:37:40.018718   50299 config.go:182] Loaded profile config "pause-251981": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:37:40.019374   50299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:37:40.019438   50299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:37:40.038822   50299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36115
	I1018 09:37:40.039501   50299 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:37:40.040206   50299 main.go:141] libmachine: Using API Version  1
	I1018 09:37:40.040229   50299 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:37:40.040682   50299 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:37:40.040903   50299 main.go:141] libmachine: (pause-251981) Calling .DriverName
	I1018 09:37:40.041243   50299 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:37:40.041680   50299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:37:40.041725   50299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:37:40.056303   50299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36091
	I1018 09:37:40.056935   50299 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:37:40.057615   50299 main.go:141] libmachine: Using API Version  1
	I1018 09:37:40.057640   50299 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:37:40.058091   50299 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:37:40.058322   50299 main.go:141] libmachine: (pause-251981) Calling .DriverName
	I1018 09:37:40.101494   50299 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 09:37:40.103113   50299 start.go:305] selected driver: kvm2
	I1018 09:37:40.103134   50299 start.go:925] validating driver "kvm2" against &{Name:pause-251981 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.1 ClusterName:pause-251981 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.16 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-instal
ler:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:37:40.103322   50299 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:37:40.103789   50299 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:37:40.103877   50299 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21767-6063/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:37:40.121120   50299 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:37:40.121174   50299 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21767-6063/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:37:40.137510   50299 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:37:40.138653   50299 cni.go:84] Creating CNI manager for ""
	I1018 09:37:40.138720   50299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:37:40.138789   50299 start.go:349] cluster config:
	{Name:pause-251981 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-251981 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.16 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false p
ortainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:37:40.139003   50299 iso.go:125] acquiring lock: {Name:mk5e486e8f05c541fb7f7e8ec869cafc091f385a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:37:40.141229   50299 out.go:179] * Starting "pause-251981" primary control-plane node in "pause-251981" cluster
	I1018 09:37:40.142554   50299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:37:40.142600   50299 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:37:40.142608   50299 cache.go:58] Caching tarball of preloaded images
	I1018 09:37:40.142727   50299 preload.go:233] Found /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:37:40.142742   50299 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:37:40.142878   50299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/pause-251981/config.json ...
	I1018 09:37:40.143163   50299 start.go:360] acquireMachinesLock for pause-251981: {Name:mk264c321ec76ef9ad1eaece53fae2e5807c459a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 09:37:53.759039   50299 start.go:364] duration metric: took 13.615827337s to acquireMachinesLock for "pause-251981"
	I1018 09:37:53.759124   50299 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:37:53.759149   50299 fix.go:54] fixHost starting: 
	I1018 09:37:53.759725   50299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:37:53.759778   50299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:37:53.780596   50299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I1018 09:37:53.781183   50299 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:37:53.781716   50299 main.go:141] libmachine: Using API Version  1
	I1018 09:37:53.781740   50299 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:37:53.782182   50299 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:37:53.782422   50299 main.go:141] libmachine: (pause-251981) Calling .DriverName
	I1018 09:37:53.782599   50299 main.go:141] libmachine: (pause-251981) Calling .GetState
	I1018 09:37:53.784705   50299 fix.go:112] recreateIfNeeded on pause-251981: state=Running err=<nil>
	W1018 09:37:53.784735   50299 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:37:53.786515   50299 out.go:252] * Updating the running kvm2 "pause-251981" VM ...
	I1018 09:37:53.786555   50299 machine.go:93] provisionDockerMachine start ...
	I1018 09:37:53.786575   50299 main.go:141] libmachine: (pause-251981) Calling .DriverName
	I1018 09:37:53.786824   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHHostname
	I1018 09:37:53.791138   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:37:53.792627   50299 main.go:141] libmachine: (pause-251981) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:98:9d", ip: ""} in network mk-pause-251981: {Iface:virbr4 ExpiryTime:2025-10-18 10:36:33 +0000 UTC Type:0 Mac:52:54:00:33:98:9d Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-251981 Clientid:01:52:54:00:33:98:9d}
	I1018 09:37:53.792662   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined IP address 192.168.72.16 and MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:37:53.792701   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHPort
	I1018 09:37:53.792905   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHKeyPath
	I1018 09:37:53.793346   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHKeyPath
	I1018 09:37:53.793876   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHUsername
	I1018 09:37:53.794109   50299 main.go:141] libmachine: Using SSH client type: native
	I1018 09:37:53.794424   50299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.16 22 <nil> <nil>}
	I1018 09:37:53.794438   50299 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:37:53.926829   50299 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-251981
	
	I1018 09:37:53.926863   50299 main.go:141] libmachine: (pause-251981) Calling .GetMachineName
	I1018 09:37:53.927162   50299 buildroot.go:166] provisioning hostname "pause-251981"
	I1018 09:37:53.927195   50299 main.go:141] libmachine: (pause-251981) Calling .GetMachineName
	I1018 09:37:53.927435   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHHostname
	I1018 09:37:53.931662   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:37:53.932269   50299 main.go:141] libmachine: (pause-251981) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:98:9d", ip: ""} in network mk-pause-251981: {Iface:virbr4 ExpiryTime:2025-10-18 10:36:33 +0000 UTC Type:0 Mac:52:54:00:33:98:9d Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-251981 Clientid:01:52:54:00:33:98:9d}
	I1018 09:37:53.932292   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined IP address 192.168.72.16 and MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:37:53.932611   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHPort
	I1018 09:37:53.932833   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHKeyPath
	I1018 09:37:53.933032   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHKeyPath
	I1018 09:37:53.933246   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHUsername
	I1018 09:37:53.933463   50299 main.go:141] libmachine: Using SSH client type: native
	I1018 09:37:53.933832   50299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.16 22 <nil> <nil>}
	I1018 09:37:53.933857   50299 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-251981 && echo "pause-251981" | sudo tee /etc/hostname
	I1018 09:37:54.094784   50299 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-251981
	
	I1018 09:37:54.094816   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHHostname
	I1018 09:37:54.098574   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:37:54.099079   50299 main.go:141] libmachine: (pause-251981) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:98:9d", ip: ""} in network mk-pause-251981: {Iface:virbr4 ExpiryTime:2025-10-18 10:36:33 +0000 UTC Type:0 Mac:52:54:00:33:98:9d Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-251981 Clientid:01:52:54:00:33:98:9d}
	I1018 09:37:54.099135   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined IP address 192.168.72.16 and MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:37:54.099389   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHPort
	I1018 09:37:54.099618   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHKeyPath
	I1018 09:37:54.099787   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHKeyPath
	I1018 09:37:54.099952   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHUsername
	I1018 09:37:54.100133   50299 main.go:141] libmachine: Using SSH client type: native
	I1018 09:37:54.100405   50299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.16 22 <nil> <nil>}
	I1018 09:37:54.100430   50299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-251981' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-251981/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-251981' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:37:54.232842   50299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:37:54.232875   50299 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21767-6063/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-6063/.minikube}
	I1018 09:37:54.232913   50299 buildroot.go:174] setting up certificates
	I1018 09:37:54.232947   50299 provision.go:84] configureAuth start
	I1018 09:37:54.232962   50299 main.go:141] libmachine: (pause-251981) Calling .GetMachineName
	I1018 09:37:54.233348   50299 main.go:141] libmachine: (pause-251981) Calling .GetIP
	I1018 09:37:54.237789   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:37:54.238309   50299 main.go:141] libmachine: (pause-251981) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:98:9d", ip: ""} in network mk-pause-251981: {Iface:virbr4 ExpiryTime:2025-10-18 10:36:33 +0000 UTC Type:0 Mac:52:54:00:33:98:9d Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-251981 Clientid:01:52:54:00:33:98:9d}
	I1018 09:37:54.238343   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined IP address 192.168.72.16 and MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:37:54.238590   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHHostname
	I1018 09:37:54.242082   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:37:54.242753   50299 main.go:141] libmachine: (pause-251981) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:98:9d", ip: ""} in network mk-pause-251981: {Iface:virbr4 ExpiryTime:2025-10-18 10:36:33 +0000 UTC Type:0 Mac:52:54:00:33:98:9d Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-251981 Clientid:01:52:54:00:33:98:9d}
	I1018 09:37:54.242787   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined IP address 192.168.72.16 and MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:37:54.242969   50299 provision.go:143] copyHostCerts
	I1018 09:37:54.243067   50299 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-6063/.minikube/ca.pem, removing ...
	I1018 09:37:54.243094   50299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-6063/.minikube/ca.pem
	I1018 09:37:54.243169   50299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-6063/.minikube/ca.pem (1078 bytes)
	I1018 09:37:54.243295   50299 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-6063/.minikube/cert.pem, removing ...
	I1018 09:37:54.243318   50299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-6063/.minikube/cert.pem
	I1018 09:37:54.243359   50299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-6063/.minikube/cert.pem (1123 bytes)
	I1018 09:37:54.243454   50299 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-6063/.minikube/key.pem, removing ...
	I1018 09:37:54.243466   50299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-6063/.minikube/key.pem
	I1018 09:37:54.243499   50299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-6063/.minikube/key.pem (1675 bytes)
	I1018 09:37:54.243577   50299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca-key.pem org=jenkins.pause-251981 san=[127.0.0.1 192.168.72.16 localhost minikube pause-251981]
	I1018 09:37:54.517293   50299 provision.go:177] copyRemoteCerts
	I1018 09:37:54.517365   50299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:37:54.517395   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHHostname
	I1018 09:37:54.521263   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:37:54.521763   50299 main.go:141] libmachine: (pause-251981) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:98:9d", ip: ""} in network mk-pause-251981: {Iface:virbr4 ExpiryTime:2025-10-18 10:36:33 +0000 UTC Type:0 Mac:52:54:00:33:98:9d Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-251981 Clientid:01:52:54:00:33:98:9d}
	I1018 09:37:54.521800   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined IP address 192.168.72.16 and MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:37:54.522149   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHPort
	I1018 09:37:54.522401   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHKeyPath
	I1018 09:37:54.522671   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHUsername
	I1018 09:37:54.522868   50299 sshutil.go:53] new ssh client: &{IP:192.168.72.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/pause-251981/id_rsa Username:docker}
	I1018 09:37:54.622812   50299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:37:54.659961   50299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 09:37:54.701376   50299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:37:54.742332   50299 provision.go:87] duration metric: took 509.369338ms to configureAuth
	I1018 09:37:54.742368   50299 buildroot.go:189] setting minikube options for container-runtime
	I1018 09:37:54.742595   50299 config.go:182] Loaded profile config "pause-251981": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:37:54.742671   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHHostname
	I1018 09:37:54.745668   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:37:54.746123   50299 main.go:141] libmachine: (pause-251981) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:98:9d", ip: ""} in network mk-pause-251981: {Iface:virbr4 ExpiryTime:2025-10-18 10:36:33 +0000 UTC Type:0 Mac:52:54:00:33:98:9d Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-251981 Clientid:01:52:54:00:33:98:9d}
	I1018 09:37:54.746147   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined IP address 192.168.72.16 and MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:37:54.746426   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHPort
	I1018 09:37:54.746642   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHKeyPath
	I1018 09:37:54.746816   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHKeyPath
	I1018 09:37:54.747030   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHUsername
	I1018 09:37:54.747225   50299 main.go:141] libmachine: Using SSH client type: native
	I1018 09:37:54.747498   50299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.16 22 <nil> <nil>}
	I1018 09:37:54.747521   50299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:38:00.341428   50299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:38:00.341467   50299 machine.go:96] duration metric: took 6.554903069s to provisionDockerMachine
	I1018 09:38:00.341483   50299 start.go:293] postStartSetup for "pause-251981" (driver="kvm2")
	I1018 09:38:00.341496   50299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:38:00.341521   50299 main.go:141] libmachine: (pause-251981) Calling .DriverName
	I1018 09:38:00.341896   50299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:38:00.341945   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHHostname
	I1018 09:38:00.345549   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:38:00.346068   50299 main.go:141] libmachine: (pause-251981) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:98:9d", ip: ""} in network mk-pause-251981: {Iface:virbr4 ExpiryTime:2025-10-18 10:36:33 +0000 UTC Type:0 Mac:52:54:00:33:98:9d Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-251981 Clientid:01:52:54:00:33:98:9d}
	I1018 09:38:00.346113   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined IP address 192.168.72.16 and MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:38:00.346315   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHPort
	I1018 09:38:00.346519   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHKeyPath
	I1018 09:38:00.346676   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHUsername
	I1018 09:38:00.346817   50299 sshutil.go:53] new ssh client: &{IP:192.168.72.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/pause-251981/id_rsa Username:docker}
	I1018 09:38:00.440523   50299 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:38:00.446013   50299 info.go:137] Remote host: Buildroot 2025.02
	I1018 09:38:00.446048   50299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-6063/.minikube/addons for local assets ...
	I1018 09:38:00.446113   50299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-6063/.minikube/files for local assets ...
	I1018 09:38:00.446183   50299 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-6063/.minikube/files/etc/ssl/certs/99562.pem -> 99562.pem in /etc/ssl/certs
	I1018 09:38:00.446277   50299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:38:00.460671   50299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/files/etc/ssl/certs/99562.pem --> /etc/ssl/certs/99562.pem (1708 bytes)
	I1018 09:38:00.496607   50299 start.go:296] duration metric: took 155.107345ms for postStartSetup
	I1018 09:38:00.496648   50299 fix.go:56] duration metric: took 6.737506516s for fixHost
	I1018 09:38:00.496762   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHHostname
	I1018 09:38:00.500670   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:38:00.501149   50299 main.go:141] libmachine: (pause-251981) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:98:9d", ip: ""} in network mk-pause-251981: {Iface:virbr4 ExpiryTime:2025-10-18 10:36:33 +0000 UTC Type:0 Mac:52:54:00:33:98:9d Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-251981 Clientid:01:52:54:00:33:98:9d}
	I1018 09:38:00.501196   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined IP address 192.168.72.16 and MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:38:00.501456   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHPort
	I1018 09:38:00.501712   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHKeyPath
	I1018 09:38:00.501947   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHKeyPath
	I1018 09:38:00.502104   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHUsername
	I1018 09:38:00.502315   50299 main.go:141] libmachine: Using SSH client type: native
	I1018 09:38:00.502626   50299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.16 22 <nil> <nil>}
	I1018 09:38:00.502639   50299 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 09:38:00.624456   50299 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760780280.619323183
	
	I1018 09:38:00.624479   50299 fix.go:216] guest clock: 1760780280.619323183
	I1018 09:38:00.624486   50299 fix.go:229] Guest: 2025-10-18 09:38:00.619323183 +0000 UTC Remote: 2025-10-18 09:38:00.496652657 +0000 UTC m=+21.777305690 (delta=122.670526ms)
	I1018 09:38:00.624506   50299 fix.go:200] guest clock delta is within tolerance: 122.670526ms
	I1018 09:38:00.624513   50299 start.go:83] releasing machines lock for "pause-251981", held for 6.865443092s
	I1018 09:38:00.624542   50299 main.go:141] libmachine: (pause-251981) Calling .DriverName
	I1018 09:38:00.624877   50299 main.go:141] libmachine: (pause-251981) Calling .GetIP
	I1018 09:38:00.629157   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:38:00.629648   50299 main.go:141] libmachine: (pause-251981) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:98:9d", ip: ""} in network mk-pause-251981: {Iface:virbr4 ExpiryTime:2025-10-18 10:36:33 +0000 UTC Type:0 Mac:52:54:00:33:98:9d Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-251981 Clientid:01:52:54:00:33:98:9d}
	I1018 09:38:00.629679   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined IP address 192.168.72.16 and MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:38:00.630003   50299 main.go:141] libmachine: (pause-251981) Calling .DriverName
	I1018 09:38:00.630801   50299 main.go:141] libmachine: (pause-251981) Calling .DriverName
	I1018 09:38:00.631068   50299 main.go:141] libmachine: (pause-251981) Calling .DriverName
	I1018 09:38:00.631185   50299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:38:00.631260   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHHostname
	I1018 09:38:00.631346   50299 ssh_runner.go:195] Run: cat /version.json
	I1018 09:38:00.631376   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHHostname
	I1018 09:38:00.635262   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:38:00.635429   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:38:00.635723   50299 main.go:141] libmachine: (pause-251981) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:98:9d", ip: ""} in network mk-pause-251981: {Iface:virbr4 ExpiryTime:2025-10-18 10:36:33 +0000 UTC Type:0 Mac:52:54:00:33:98:9d Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-251981 Clientid:01:52:54:00:33:98:9d}
	I1018 09:38:00.635751   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined IP address 192.168.72.16 and MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:38:00.635827   50299 main.go:141] libmachine: (pause-251981) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:98:9d", ip: ""} in network mk-pause-251981: {Iface:virbr4 ExpiryTime:2025-10-18 10:36:33 +0000 UTC Type:0 Mac:52:54:00:33:98:9d Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-251981 Clientid:01:52:54:00:33:98:9d}
	I1018 09:38:00.635861   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined IP address 192.168.72.16 and MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:38:00.636021   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHPort
	I1018 09:38:00.636241   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHKeyPath
	I1018 09:38:00.636283   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHPort
	I1018 09:38:00.636446   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHKeyPath
	I1018 09:38:00.636522   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHUsername
	I1018 09:38:00.636586   50299 main.go:141] libmachine: (pause-251981) Calling .GetSSHUsername
	I1018 09:38:00.636657   50299 sshutil.go:53] new ssh client: &{IP:192.168.72.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/pause-251981/id_rsa Username:docker}
	I1018 09:38:00.636710   50299 sshutil.go:53] new ssh client: &{IP:192.168.72.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/pause-251981/id_rsa Username:docker}
	I1018 09:38:00.721995   50299 ssh_runner.go:195] Run: systemctl --version
	I1018 09:38:00.751355   50299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:38:00.910071   50299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:38:00.920017   50299 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:38:00.920093   50299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:38:00.935331   50299 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:38:00.935361   50299 start.go:495] detecting cgroup driver to use...
	I1018 09:38:00.935432   50299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:38:00.961001   50299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:38:00.985057   50299 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:38:00.985117   50299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:38:01.009685   50299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:38:01.029588   50299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:38:01.285083   50299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:38:01.486593   50299 docker.go:234] disabling docker service ...
	I1018 09:38:01.486687   50299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:38:01.524011   50299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:38:01.541168   50299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:38:01.745462   50299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:38:01.922769   50299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:38:01.941884   50299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:38:01.966891   50299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:38:01.966980   50299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:01.980998   50299 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:38:01.981085   50299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:01.995064   50299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:02.008848   50299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:02.024055   50299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:38:02.040350   50299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:02.056634   50299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:02.075463   50299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:02.093250   50299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:38:02.108276   50299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:38:02.121478   50299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:38:02.305713   50299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:38:07.661238   50299 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.355478403s)
	I1018 09:38:07.661269   50299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:38:07.661331   50299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:38:07.669692   50299 start.go:563] Will wait 60s for crictl version
	I1018 09:38:07.669767   50299 ssh_runner.go:195] Run: which crictl
	I1018 09:38:07.675587   50299 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 09:38:07.723330   50299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 09:38:07.723449   50299 ssh_runner.go:195] Run: crio --version
	I1018 09:38:07.764578   50299 ssh_runner.go:195] Run: crio --version
	I1018 09:38:07.805004   50299 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 09:38:07.806403   50299 main.go:141] libmachine: (pause-251981) Calling .GetIP
	I1018 09:38:07.810399   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:38:07.810999   50299 main.go:141] libmachine: (pause-251981) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:98:9d", ip: ""} in network mk-pause-251981: {Iface:virbr4 ExpiryTime:2025-10-18 10:36:33 +0000 UTC Type:0 Mac:52:54:00:33:98:9d Iaid: IPaddr:192.168.72.16 Prefix:24 Hostname:pause-251981 Clientid:01:52:54:00:33:98:9d}
	I1018 09:38:07.811031   50299 main.go:141] libmachine: (pause-251981) DBG | domain pause-251981 has defined IP address 192.168.72.16 and MAC address 52:54:00:33:98:9d in network mk-pause-251981
	I1018 09:38:07.811259   50299 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1018 09:38:07.817520   50299 kubeadm.go:883] updating cluster {Name:pause-251981 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-251981 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.16 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidi
a-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:38:07.817654   50299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:38:07.817718   50299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:38:07.875211   50299 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:38:07.875241   50299 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:38:07.875295   50299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:38:07.932718   50299 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:38:07.932745   50299 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:38:07.932754   50299 kubeadm.go:934] updating node { 192.168.72.16 8443 v1.34.1 crio true true} ...
	I1018 09:38:07.932874   50299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-251981 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-251981 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
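The [Service] block above uses the standard systemd drop-in override pattern: an empty ExecStart= clears the unit's original command, and the following ExecStart line re-defines it with the node-specific flags. A minimal Go sketch of rendering such a 10-kubeadm.conf drop-in (the template text and field names here are illustrative, not minikube's actual implementation) could look like this:

package main

import (
	"os"
	"text/template"
)

// Illustrative only: minikube's real template lives in its own packages.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above; in practice they come from the cluster config.
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.34.1",
		"NodeName":          "pause-251981",
		"NodeIP":            "192.168.72.16",
	})
}

The rendered text corresponds to what the log shows being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below.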
	I1018 09:38:07.932977   50299 ssh_runner.go:195] Run: crio config
	I1018 09:38:08.003042   50299 cni.go:84] Creating CNI manager for ""
	I1018 09:38:08.003071   50299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:38:08.003093   50299 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:38:08.003121   50299 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.16 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-251981 NodeName:pause-251981 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:38:08.003275   50299 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-251981"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.16"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.16"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
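The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A quick way to sanity-check such a generated file is to walk its YAML documents and report each kind; the sketch below assumes gopkg.in/yaml.v3 and a local file name, neither of which is taken from minikube itself:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Minimal sketch: confirm a generated kubeadm.yaml (like the one above) is
// valid multi-document YAML and print the apiVersion/kind of each document.
func main() {
	f, err := os.Open("kubeadm.yaml") // illustrative path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}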
	
	I1018 09:38:08.003350   50299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:38:08.020541   50299 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:38:08.020623   50299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:38:08.036222   50299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1018 09:38:08.064941   50299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:38:08.094637   50299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1018 09:38:08.123427   50299 ssh_runner.go:195] Run: grep 192.168.72.16	control-plane.minikube.internal$ /etc/hosts
	I1018 09:38:08.129452   50299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:38:08.316819   50299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:38:08.338782   50299 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/pause-251981 for IP: 192.168.72.16
	I1018 09:38:08.338816   50299 certs.go:195] generating shared ca certs ...
	I1018 09:38:08.338838   50299 certs.go:227] acquiring lock for ca certs: {Name:mk72b8eadb27773dc6399bddc4b95ee0664cbf67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:38:08.339036   50299 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-6063/.minikube/ca.key
	I1018 09:38:08.339093   50299 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.key
	I1018 09:38:08.339106   50299 certs.go:257] generating profile certs ...
	I1018 09:38:08.339255   50299 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/pause-251981/client.key
	I1018 09:38:08.339327   50299 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/pause-251981/apiserver.key.5e5aaa75
	I1018 09:38:08.339378   50299 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/pause-251981/proxy-client.key
	I1018 09:38:08.339518   50299 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/9956.pem (1338 bytes)
	W1018 09:38:08.339556   50299 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-6063/.minikube/certs/9956_empty.pem, impossibly tiny 0 bytes
	I1018 09:38:08.339568   50299 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:38:08.339600   50299 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:38:08.339631   50299 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:38:08.339663   50299 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/key.pem (1675 bytes)
	I1018 09:38:08.339719   50299 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/files/etc/ssl/certs/99562.pem (1708 bytes)
	I1018 09:38:08.340611   50299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:38:08.380598   50299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:38:08.436184   50299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:38:08.523821   50299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:38:08.641718   50299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/pause-251981/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 09:38:08.713291   50299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/pause-251981/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:38:08.841713   50299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/pause-251981/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:38:08.949542   50299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/pause-251981/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:38:09.049826   50299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:38:09.114210   50299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/certs/9956.pem --> /usr/share/ca-certificates/9956.pem (1338 bytes)
	I1018 09:38:09.163225   50299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/files/etc/ssl/certs/99562.pem --> /usr/share/ca-certificates/99562.pem (1708 bytes)
	I1018 09:38:09.214910   50299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:38:09.263497   50299 ssh_runner.go:195] Run: openssl version
	I1018 09:38:09.280249   50299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:38:09.348438   50299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:38:09.370541   50299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:38:09.370613   50299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:38:09.391102   50299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:38:09.423736   50299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9956.pem && ln -fs /usr/share/ca-certificates/9956.pem /etc/ssl/certs/9956.pem"
	I1018 09:38:09.523152   50299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9956.pem
	I1018 09:38:09.532821   50299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:38 /usr/share/ca-certificates/9956.pem
	I1018 09:38:09.532907   50299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9956.pem
	I1018 09:38:09.575637   50299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9956.pem /etc/ssl/certs/51391683.0"
	I1018 09:38:09.627244   50299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99562.pem && ln -fs /usr/share/ca-certificates/99562.pem /etc/ssl/certs/99562.pem"
	I1018 09:38:09.676778   50299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99562.pem
	I1018 09:38:09.692567   50299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:38 /usr/share/ca-certificates/99562.pem
	I1018 09:38:09.692645   50299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99562.pem
	I1018 09:38:09.712679   50299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99562.pem /etc/ssl/certs/3ec20f2e.0"
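The sequence above installs each CA into the guest's system trust store: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it as /etc/ssl/certs/<hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A rough Go equivalent of one iteration, shelling out to openssl the same way the commands in the log do (paths illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Sketch of the CA-trust step: compute the OpenSSL subject hash of a PEM
// certificate and link it as /etc/ssl/certs/<hash>.0 so TLS clients trust it.
func main() {
	const cert = "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("trusted:", link)
}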
	I1018 09:38:09.750400   50299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:38:09.772249   50299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:38:09.785489   50299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:38:09.798799   50299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:38:09.816287   50299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:38:09.834138   50299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:38:09.851909   50299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
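Each `openssl x509 -noout -checkend 86400` call above asserts that the certificate remains valid for at least another 24 hours (86400 seconds). The same check expressed in Go with crypto/x509 (file path illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Sketch of what `openssl x509 -noout -checkend 86400` verifies: the
// certificate's NotAfter must be at least 24 hours in the future.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}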
	I1018 09:38:09.865811   50299 kubeadm.go:400] StartCluster: {Name:pause-251981 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-251981 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.16 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-g
pu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:38:09.865974   50299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:38:09.866067   50299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:38:10.008841   50299 cri.go:89] found id: "fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb"
	I1018 09:38:10.008874   50299 cri.go:89] found id: "4129f2037f9f283b2dcc43160d4a1cceb6aa28b44ee94c719f0b35e305d9b153"
	I1018 09:38:10.008880   50299 cri.go:89] found id: "201cd2c1e158f4cd558f4e00ed2a29b45d3ac5d6699b526aa3173435fc58e0e7"
	I1018 09:38:10.008891   50299 cri.go:89] found id: "02ee425859eef16741c99f5436243e9681dfb3da34b4721472f7361d29ba47dc"
	I1018 09:38:10.008895   50299 cri.go:89] found id: "8a58eff517402787ff5f9c7a733a3863ce25d66a53c0f5e8fd09bd24cfa911d5"
	I1018 09:38:10.008900   50299 cri.go:89] found id: "b2a9247b81b38d66905d17b1c7125eaaca22be0c11e84329c01545bf8a63d3f7"
	I1018 09:38:10.008904   50299 cri.go:89] found id: "a7c1e6744ecf6491aa07a3f4fe3bacfbcf9a3bf2ea05b19597103b52582c9cc8"
	I1018 09:38:10.008908   50299 cri.go:89] found id: "92f511266e63ef40b9488d80b5ebd3fe0c035e641634ce8baad15890e2e55b24"
	I1018 09:38:10.008912   50299 cri.go:89] found id: "cb79c1b4da783f7f285f67013fe29663bdaa35310af35652f81a5fedb1efdaf0"
	I1018 09:38:10.008940   50299 cri.go:89] found id: "a5280ebc74b43a105559a2e112f577fd5d14088a8bbd269813a39a36f3d490d2"
	I1018 09:38:10.008947   50299 cri.go:89] found id: "3bb69a3ac84bc3f90569e6445c8862e4317994bfaead9574469288f1091a7739"
	I1018 09:38:10.008951   50299 cri.go:89] found id: ""
	I1018 09:38:10.009009   50299 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
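The tail of the log above is the pre-pause inventory step: `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` returns one container ID per line, and cri.go collects them before `runc list` is consulted. A small Go sketch of the same listing (the command is the one from the log; error handling is simplified):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// List the IDs of every kube-system container via CRI, one ID per line,
// exactly the command the test runs over SSH.
func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}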
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-251981 -n pause-251981
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-251981 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-251981 logs -n 25: (1.788555623s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────
────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                  ARGS                                                                                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────
────────┼─────────────────────┤
	│ delete  │ -p cilium-081586                                                                                                                                                                                                                                                        │ cilium-081586             │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ start   │ -p stopped-upgrade-253577 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                          │ stopped-upgrade-253577    │ jenkins │ v1.32.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:36 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-947647 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                                             │ running-upgrade-947647    │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ delete  │ -p running-upgrade-947647                                                                                                                                                                                                                                               │ running-upgrade-947647    │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ start   │ -p pause-251981 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                     │ pause-251981              │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:37 UTC │
	│ ssh     │ -p NoKubernetes-914044 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                                                 │ NoKubernetes-914044       │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ stop    │ -p NoKubernetes-914044                                                                                                                                                                                                                                                  │ NoKubernetes-914044       │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ start   │ -p NoKubernetes-914044 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                              │ NoKubernetes-914044       │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:36 UTC │
	│ stop    │ stopped-upgrade-253577 stop                                                                                                                                                                                                                                             │ stopped-upgrade-253577    │ jenkins │ v1.32.0 │ 18 Oct 25 09:36 UTC │ 18 Oct 25 09:36 UTC │
	│ start   │ -p stopped-upgrade-253577 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                      │ stopped-upgrade-253577    │ jenkins │ v1.37.0 │ 18 Oct 25 09:36 UTC │ 18 Oct 25 09:37 UTC │
	│ ssh     │ -p NoKubernetes-914044 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                                                 │ NoKubernetes-914044       │ jenkins │ v1.37.0 │ 18 Oct 25 09:36 UTC │                     │
	│ delete  │ -p NoKubernetes-914044                                                                                                                                                                                                                                                  │ NoKubernetes-914044       │ jenkins │ v1.37.0 │ 18 Oct 25 09:36 UTC │ 18 Oct 25 09:36 UTC │
	│ start   │ -p cert-expiration-209551 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                        │ cert-expiration-209551    │ jenkins │ v1.37.0 │ 18 Oct 25 09:36 UTC │ 18 Oct 25 09:37 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-253577 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                                             │ stopped-upgrade-253577    │ jenkins │ v1.37.0 │ 18 Oct 25 09:37 UTC │                     │
	│ delete  │ -p stopped-upgrade-253577                                                                                                                                                                                                                                               │ stopped-upgrade-253577    │ jenkins │ v1.37.0 │ 18 Oct 25 09:37 UTC │ 18 Oct 25 09:37 UTC │
	│ start   │ -p force-systemd-flag-850953 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                   │ force-systemd-flag-850953 │ jenkins │ v1.37.0 │ 18 Oct 25 09:37 UTC │ 18 Oct 25 09:38 UTC │
	│ start   │ -p pause-251981 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                              │ pause-251981              │ jenkins │ v1.37.0 │ 18 Oct 25 09:37 UTC │ 18 Oct 25 09:38 UTC │
	│ delete  │ -p kubernetes-upgrade-178467                                                                                                                                                                                                                                            │ kubernetes-upgrade-178467 │ jenkins │ v1.37.0 │ 18 Oct 25 09:37 UTC │ 18 Oct 25 09:37 UTC │
	│ start   │ -p cert-options-586276 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                     │ cert-options-586276       │ jenkins │ v1.37.0 │ 18 Oct 25 09:37 UTC │ 18 Oct 25 09:38 UTC │
	│ ssh     │ force-systemd-flag-850953 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                                                    │ force-systemd-flag-850953 │ jenkins │ v1.37.0 │ 18 Oct 25 09:38 UTC │ 18 Oct 25 09:38 UTC │
	│ delete  │ -p force-systemd-flag-850953                                                                                                                                                                                                                                            │ force-systemd-flag-850953 │ jenkins │ v1.37.0 │ 18 Oct 25 09:38 UTC │ 18 Oct 25 09:38 UTC │
	│ start   │ -p old-k8s-version-874951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0 │ old-k8s-version-874951    │ jenkins │ v1.37.0 │ 18 Oct 25 09:38 UTC │                     │
	│ ssh     │ cert-options-586276 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                                             │ cert-options-586276       │ jenkins │ v1.37.0 │ 18 Oct 25 09:38 UTC │ 18 Oct 25 09:38 UTC │
	│ ssh     │ -p cert-options-586276 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                                           │ cert-options-586276       │ jenkins │ v1.37.0 │ 18 Oct 25 09:38 UTC │ 18 Oct 25 09:38 UTC │
	│ delete  │ -p cert-options-586276                                                                                                                                                                                                                                                  │ cert-options-586276       │ jenkins │ v1.37.0 │ 18 Oct 25 09:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────
────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:38:16
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
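Every line that follows uses the klog header format documented above. For ad-hoc analysis of these logs, a simple parser can split out severity, timestamp, PID, source location and message; the regular expression below is an illustration written for this report, not something the test suite ships:

package main

import (
	"fmt"
	"regexp"
)

// Illustrative parser for the klog line format documented above:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	sample := "I1018 09:38:16.736867   51011 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s month=%s day=%s time=%s pid=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
}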
	I1018 09:38:16.736867   51011 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:38:16.737158   51011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:38:16.737168   51011 out.go:374] Setting ErrFile to fd 2...
	I1018 09:38:16.737172   51011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:38:16.737472   51011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
	I1018 09:38:16.738002   51011 out.go:368] Setting JSON to false
	I1018 09:38:16.738966   51011 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4847,"bootTime":1760775450,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:38:16.739060   51011 start.go:141] virtualization: kvm guest
	I1018 09:38:16.741239   51011 out.go:179] * [old-k8s-version-874951] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:38:16.742897   51011 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:38:16.742946   51011 notify.go:220] Checking for updates...
	I1018 09:38:16.746480   51011 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:38:16.748081   51011 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	I1018 09:38:16.749510   51011 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	I1018 09:38:16.751147   51011 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:38:16.752558   51011 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:38:16.754537   51011 config.go:182] Loaded profile config "cert-expiration-209551": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:38:16.754677   51011 config.go:182] Loaded profile config "cert-options-586276": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:38:16.754821   51011 config.go:182] Loaded profile config "pause-251981": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:38:16.754950   51011 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:38:16.791967   51011 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 09:38:16.793637   51011 start.go:305] selected driver: kvm2
	I1018 09:38:16.793663   51011 start.go:925] validating driver "kvm2" against <nil>
	I1018 09:38:16.793678   51011 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:38:16.794439   51011 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:38:16.794549   51011 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21767-6063/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:38:16.810477   51011 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:38:16.810508   51011 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21767-6063/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:38:16.827246   51011 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:38:16.827300   51011 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:38:16.827663   51011 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:38:16.827705   51011 cni.go:84] Creating CNI manager for ""
	I1018 09:38:16.827760   51011 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:38:16.827775   51011 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 09:38:16.827832   51011 start.go:349] cluster config:
	{Name:old-k8s-version-874951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-874951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHA
gentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:38:16.827989   51011 iso.go:125] acquiring lock: {Name:mk5e486e8f05c541fb7f7e8ec869cafc091f385a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:38:16.831189   51011 out.go:179] * Starting "old-k8s-version-874951" primary control-plane node in "old-k8s-version-874951" cluster
	I1018 09:38:14.030533   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:14.031175   50493 main.go:141] libmachine: (cert-options-586276) DBG | no network interface addresses found for domain cert-options-586276 (source=lease)
	I1018 09:38:14.031196   50493 main.go:141] libmachine: (cert-options-586276) DBG | trying to list again with source=arp
	I1018 09:38:14.031494   50493 main.go:141] libmachine: (cert-options-586276) DBG | unable to find current IP address of domain cert-options-586276 in network mk-cert-options-586276 (interfaces detected: [])
	I1018 09:38:14.031548   50493 main.go:141] libmachine: (cert-options-586276) DBG | I1018 09:38:14.031477   50688 retry.go:31] will retry after 2.647768911s: waiting for domain to come up
	I1018 09:38:16.681011   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:16.681593   50493 main.go:141] libmachine: (cert-options-586276) DBG | no network interface addresses found for domain cert-options-586276 (source=lease)
	I1018 09:38:16.681608   50493 main.go:141] libmachine: (cert-options-586276) DBG | trying to list again with source=arp
	I1018 09:38:16.681996   50493 main.go:141] libmachine: (cert-options-586276) DBG | unable to find current IP address of domain cert-options-586276 in network mk-cert-options-586276 (interfaces detected: [])
	I1018 09:38:16.682052   50493 main.go:141] libmachine: (cert-options-586276) DBG | I1018 09:38:16.681997   50688 retry.go:31] will retry after 4.528556043s: waiting for domain to come up
	I1018 09:38:16.832677   51011 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 09:38:16.832771   51011 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1018 09:38:16.832784   51011 cache.go:58] Caching tarball of preloaded images
	I1018 09:38:16.832941   51011 preload.go:233] Found /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:38:16.832957   51011 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
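The preload check above is a plain cache lookup: if the per-version tarball already exists under the minikube home, extraction can reuse it and the download is skipped. A minimal sketch of that lookup (helper name and fallback behaviour are illustrative, not minikube's code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Build the cached preload tarball path for a Kubernetes version, mirroring
// the file name seen in the log above.
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}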
	I1018 09:38:16.833066   51011 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/config.json ...
	I1018 09:38:16.833089   51011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/config.json: {Name:mk77f15a692d36b3cf87a770131dd6b9dbccecd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:38:16.833254   51011 start.go:360] acquireMachinesLock for old-k8s-version-874951: {Name:mk264c321ec76ef9ad1eaece53fae2e5807c459a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 09:38:21.212525   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:21.213395   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has current primary IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:21.213427   50493 main.go:141] libmachine: (cert-options-586276) found domain IP: 192.168.50.94
	I1018 09:38:21.213442   50493 main.go:141] libmachine: (cert-options-586276) reserving static IP address...
	I1018 09:38:21.213899   50493 main.go:141] libmachine: (cert-options-586276) DBG | unable to find host DHCP lease matching {name: "cert-options-586276", mac: "52:54:00:a3:b2:c5", ip: "192.168.50.94"} in network mk-cert-options-586276
	I1018 09:38:21.463469   50493 main.go:141] libmachine: (cert-options-586276) reserved static IP address 192.168.50.94 for domain cert-options-586276
	I1018 09:38:21.463485   50493 main.go:141] libmachine: (cert-options-586276) waiting for SSH...
	I1018 09:38:21.463494   50493 main.go:141] libmachine: (cert-options-586276) DBG | Getting to WaitForSSH function...
	I1018 09:38:21.467424   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:21.468023   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:21.468052   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:21.468318   50493 main.go:141] libmachine: (cert-options-586276) DBG | Using SSH client type: external
	I1018 09:38:21.468340   50493 main.go:141] libmachine: (cert-options-586276) DBG | Using SSH private key: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/cert-options-586276/id_rsa (-rw-------)
	I1018 09:38:21.468374   50493 main.go:141] libmachine: (cert-options-586276) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21767-6063/.minikube/machines/cert-options-586276/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 09:38:21.468400   50493 main.go:141] libmachine: (cert-options-586276) DBG | About to run SSH command:
	I1018 09:38:21.468414   50493 main.go:141] libmachine: (cert-options-586276) DBG | exit 0
	I1018 09:38:21.599358   50493 main.go:141] libmachine: (cert-options-586276) DBG | SSH cmd err, output: <nil>: 
	I1018 09:38:21.599630   50493 main.go:141] libmachine: (cert-options-586276) domain creation complete
	I1018 09:38:21.600154   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetConfigRaw
	I1018 09:38:21.600736   50493 main.go:141] libmachine: (cert-options-586276) Calling .DriverName
	I1018 09:38:21.600962   50493 main.go:141] libmachine: (cert-options-586276) Calling .DriverName
	I1018 09:38:21.601205   50493 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1018 09:38:21.601214   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetState
	I1018 09:38:21.603265   50493 main.go:141] libmachine: Detecting operating system of created instance...
	I1018 09:38:21.603274   50493 main.go:141] libmachine: Waiting for SSH to be available...
	I1018 09:38:21.603280   50493 main.go:141] libmachine: Getting to WaitForSSH function...
	I1018 09:38:21.603287   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHHostname
	I1018 09:38:21.606347   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:21.606720   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:21.606742   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:21.607017   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHPort
	I1018 09:38:21.607191   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:21.607374   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:21.607476   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHUsername
	I1018 09:38:21.607600   50493 main.go:141] libmachine: Using SSH client type: native
	I1018 09:38:21.607911   50493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I1018 09:38:21.607935   50493 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1018 09:38:21.719871   50493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:38:21.719887   50493 main.go:141] libmachine: Detecting the provisioner...
	I1018 09:38:21.719896   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHHostname
	I1018 09:38:21.723432   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:21.723795   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:21.723821   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:21.724024   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHPort
	I1018 09:38:21.724258   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:21.724417   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:21.724566   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHUsername
	I1018 09:38:21.724692   50493 main.go:141] libmachine: Using SSH client type: native
	I1018 09:38:21.724891   50493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I1018 09:38:21.724895   50493 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1018 09:38:21.837756   50493 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1018 09:38:21.837813   50493 main.go:141] libmachine: found compatible host: buildroot
	I1018 09:38:21.837818   50493 main.go:141] libmachine: Provisioning with buildroot...
	I1018 09:38:21.837824   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetMachineName
	I1018 09:38:21.838107   50493 buildroot.go:166] provisioning hostname "cert-options-586276"
	I1018 09:38:21.838126   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetMachineName
	I1018 09:38:21.838330   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHHostname
	I1018 09:38:21.841229   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:21.841658   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:21.841674   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:21.842022   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHPort
	I1018 09:38:21.842246   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:21.842440   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:21.842545   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHUsername
	I1018 09:38:21.842691   50493 main.go:141] libmachine: Using SSH client type: native
	I1018 09:38:21.843013   50493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I1018 09:38:21.843024   50493 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-586276 && echo "cert-options-586276" | sudo tee /etc/hostname
	I1018 09:38:21.973984   50493 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-586276
	
	I1018 09:38:21.974001   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHHostname
	I1018 09:38:21.977402   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:21.977804   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:21.977826   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:21.978044   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHPort
	I1018 09:38:21.978241   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:21.978411   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:21.978525   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHUsername
	I1018 09:38:21.978641   50493 main.go:141] libmachine: Using SSH client type: native
	I1018 09:38:21.978852   50493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I1018 09:38:21.978863   50493 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-586276' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-586276/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-586276' | sudo tee -a /etc/hosts; 
				fi
			fi
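The shell snippet above is an idempotent /etc/hosts fix-up: if no line already names the new hostname, it either rewrites the existing 127.0.1.1 entry or appends one. The same logic expressed in Go, operating on an in-memory copy for illustration (the hosts.sample file name is made up):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Ensure a "127.0.1.1 <hostname>" entry exists: skip if the hostname already
// appears anywhere, otherwise replace an existing 127.0.1.1 line or append one.
func ensureHostname(hosts, name string) string {
	if strings.Contains(hosts, name) {
		return hosts
	}
	entry := "127.0.1.1 " + name
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, entry)
	}
	return hosts + entry + "\n"
}

func main() {
	in, err := os.ReadFile("hosts.sample") // illustrative input file
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostname(string(in), "cert-options-586276"))
}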
	I1018 09:38:22.100581   50493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:38:22.100596   50493 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21767-6063/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-6063/.minikube}
	I1018 09:38:22.100628   50493 buildroot.go:174] setting up certificates
	I1018 09:38:22.100646   50493 provision.go:84] configureAuth start
	I1018 09:38:22.100654   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetMachineName
	I1018 09:38:22.101013   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetIP
	I1018 09:38:22.104392   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.104843   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:22.104861   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.105125   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHHostname
	I1018 09:38:22.107808   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.108250   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:22.108275   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.108484   50493 provision.go:143] copyHostCerts
	I1018 09:38:22.108539   50493 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-6063/.minikube/ca.pem, removing ...
	I1018 09:38:22.108552   50493 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-6063/.minikube/ca.pem
	I1018 09:38:22.108623   50493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-6063/.minikube/ca.pem (1078 bytes)
	I1018 09:38:22.108725   50493 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-6063/.minikube/cert.pem, removing ...
	I1018 09:38:22.108729   50493 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-6063/.minikube/cert.pem
	I1018 09:38:22.108764   50493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-6063/.minikube/cert.pem (1123 bytes)
	I1018 09:38:22.108842   50493 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-6063/.minikube/key.pem, removing ...
	I1018 09:38:22.108846   50493 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-6063/.minikube/key.pem
	I1018 09:38:22.108867   50493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-6063/.minikube/key.pem (1675 bytes)
	I1018 09:38:22.108949   50493 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca-key.pem org=jenkins.cert-options-586276 san=[127.0.0.1 192.168.50.94 cert-options-586276 localhost minikube]
	I1018 09:38:22.273896   50493 provision.go:177] copyRemoteCerts
	I1018 09:38:22.273958   50493 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:38:22.273980   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHHostname
	I1018 09:38:22.277244   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.277668   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:22.277689   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.277915   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHPort
	I1018 09:38:22.278112   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:22.278245   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHUsername
	I1018 09:38:22.278395   50493 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/cert-options-586276/id_rsa Username:docker}
	I1018 09:38:22.366168   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:38:22.398887   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 09:38:22.431405   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:38:22.465604   50493 provision.go:87] duration metric: took 364.947343ms to configureAuth
	I1018 09:38:22.465622   50493 buildroot.go:189] setting minikube options for container-runtime
	I1018 09:38:22.465830   50493 config.go:182] Loaded profile config "cert-options-586276": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:38:22.465903   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHHostname
	I1018 09:38:22.468915   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.469340   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:22.469383   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.469620   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHPort
	I1018 09:38:22.469843   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:22.470052   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:22.470234   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHUsername
	I1018 09:38:22.470377   50493 main.go:141] libmachine: Using SSH client type: native
	I1018 09:38:22.470587   50493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I1018 09:38:22.470599   50493 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:38:22.731732   50493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:38:22.731746   50493 main.go:141] libmachine: Checking connection to Docker...
	I1018 09:38:22.731753   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetURL
	I1018 09:38:22.733342   50493 main.go:141] libmachine: (cert-options-586276) DBG | using libvirt version 8000000
	I1018 09:38:22.736318   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.736715   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:22.736740   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.736946   50493 main.go:141] libmachine: Docker is up and running!
	I1018 09:38:22.736957   50493 main.go:141] libmachine: Reticulating splines...
	I1018 09:38:22.736964   50493 client.go:171] duration metric: took 21.936766066s to LocalClient.Create
	I1018 09:38:22.736991   50493 start.go:167] duration metric: took 21.936832107s to libmachine.API.Create "cert-options-586276"
	I1018 09:38:22.736998   50493 start.go:293] postStartSetup for "cert-options-586276" (driver="kvm2")
	I1018 09:38:22.737007   50493 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:38:22.737020   50493 main.go:141] libmachine: (cert-options-586276) Calling .DriverName
	I1018 09:38:22.737299   50493 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:38:22.737322   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHHostname
	I1018 09:38:22.739935   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.740439   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:22.740462   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.740611   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHPort
	I1018 09:38:22.740792   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:22.740996   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHUsername
	I1018 09:38:22.741171   50493 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/cert-options-586276/id_rsa Username:docker}
	I1018 09:38:22.828263   50493 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:38:22.833256   50493 info.go:137] Remote host: Buildroot 2025.02
	I1018 09:38:22.833301   50493 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-6063/.minikube/addons for local assets ...
	I1018 09:38:22.833372   50493 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-6063/.minikube/files for local assets ...
	I1018 09:38:22.833450   50493 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-6063/.minikube/files/etc/ssl/certs/99562.pem -> 99562.pem in /etc/ssl/certs
	I1018 09:38:22.833557   50493 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:38:22.846109   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/files/etc/ssl/certs/99562.pem --> /etc/ssl/certs/99562.pem (1708 bytes)
	I1018 09:38:22.995792   51011 start.go:364] duration metric: took 6.162517007s to acquireMachinesLock for "old-k8s-version-874951"
	I1018 09:38:22.995872   51011 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-874951 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-874951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:38:22.995975   51011 start.go:125] createHost starting for "" (driver="kvm2")
	I1018 09:38:21.599190   50299 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb 4129f2037f9f283b2dcc43160d4a1cceb6aa28b44ee94c719f0b35e305d9b153 201cd2c1e158f4cd558f4e00ed2a29b45d3ac5d6699b526aa3173435fc58e0e7 02ee425859eef16741c99f5436243e9681dfb3da34b4721472f7361d29ba47dc 8a58eff517402787ff5f9c7a733a3863ce25d66a53c0f5e8fd09bd24cfa911d5 b2a9247b81b38d66905d17b1c7125eaaca22be0c11e84329c01545bf8a63d3f7 a7c1e6744ecf6491aa07a3f4fe3bacfbcf9a3bf2ea05b19597103b52582c9cc8 92f511266e63ef40b9488d80b5ebd3fe0c035e641634ce8baad15890e2e55b24 cb79c1b4da783f7f285f67013fe29663bdaa35310af35652f81a5fedb1efdaf0 a5280ebc74b43a105559a2e112f577fd5d14088a8bbd269813a39a36f3d490d2 3bb69a3ac84bc3f90569e6445c8862e4317994bfaead9574469288f1091a7739: (11.278036254s)
	W1018 09:38:21.599278   50299 kubeadm.go:648] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb 4129f2037f9f283b2dcc43160d4a1cceb6aa28b44ee94c719f0b35e305d9b153 201cd2c1e158f4cd558f4e00ed2a29b45d3ac5d6699b526aa3173435fc58e0e7 02ee425859eef16741c99f5436243e9681dfb3da34b4721472f7361d29ba47dc 8a58eff517402787ff5f9c7a733a3863ce25d66a53c0f5e8fd09bd24cfa911d5 b2a9247b81b38d66905d17b1c7125eaaca22be0c11e84329c01545bf8a63d3f7 a7c1e6744ecf6491aa07a3f4fe3bacfbcf9a3bf2ea05b19597103b52582c9cc8 92f511266e63ef40b9488d80b5ebd3fe0c035e641634ce8baad15890e2e55b24 cb79c1b4da783f7f285f67013fe29663bdaa35310af35652f81a5fedb1efdaf0 a5280ebc74b43a105559a2e112f577fd5d14088a8bbd269813a39a36f3d490d2 3bb69a3ac84bc3f90569e6445c8862e4317994bfaead9574469288f1091a7739: Process exited with status 1
	stdout:
	fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb
	4129f2037f9f283b2dcc43160d4a1cceb6aa28b44ee94c719f0b35e305d9b153
	201cd2c1e158f4cd558f4e00ed2a29b45d3ac5d6699b526aa3173435fc58e0e7
	02ee425859eef16741c99f5436243e9681dfb3da34b4721472f7361d29ba47dc
	8a58eff517402787ff5f9c7a733a3863ce25d66a53c0f5e8fd09bd24cfa911d5
	b2a9247b81b38d66905d17b1c7125eaaca22be0c11e84329c01545bf8a63d3f7
	
	stderr:
	E1018 09:38:21.592804    3604 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7c1e6744ecf6491aa07a3f4fe3bacfbcf9a3bf2ea05b19597103b52582c9cc8\": container with ID starting with a7c1e6744ecf6491aa07a3f4fe3bacfbcf9a3bf2ea05b19597103b52582c9cc8 not found: ID does not exist" containerID="a7c1e6744ecf6491aa07a3f4fe3bacfbcf9a3bf2ea05b19597103b52582c9cc8"
	time="2025-10-18T09:38:21Z" level=fatal msg="stopping the container \"a7c1e6744ecf6491aa07a3f4fe3bacfbcf9a3bf2ea05b19597103b52582c9cc8\": rpc error: code = NotFound desc = could not find container \"a7c1e6744ecf6491aa07a3f4fe3bacfbcf9a3bf2ea05b19597103b52582c9cc8\": container with ID starting with a7c1e6744ecf6491aa07a3f4fe3bacfbcf9a3bf2ea05b19597103b52582c9cc8 not found: ID does not exist"
	I1018 09:38:21.599377   50299 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1018 09:38:21.643610   50299 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:38:21.661031   50299 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct 18 09:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5637 Oct 18 09:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1953 Oct 18 09:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5589 Oct 18 09:36 /etc/kubernetes/scheduler.conf
	
	I1018 09:38:21.661117   50299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:38:21.674096   50299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:38:21.687099   50299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:38:21.687169   50299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:38:21.699709   50299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:38:21.716111   50299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:38:21.716188   50299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:38:21.732769   50299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:38:21.746533   50299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:38:21.746599   50299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:38:21.759898   50299 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:38:21.772744   50299 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:38:21.830099   50299 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:38:22.878486   50493 start.go:296] duration metric: took 141.474939ms for postStartSetup
	I1018 09:38:22.878529   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetConfigRaw
	I1018 09:38:22.879168   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetIP
	I1018 09:38:22.882189   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.882493   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:22.882516   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.882838   50493 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/config.json ...
	I1018 09:38:22.883117   50493 start.go:128] duration metric: took 22.258328169s to createHost
	I1018 09:38:22.883137   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHHostname
	I1018 09:38:22.887164   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.887603   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:22.887626   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:22.887870   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHPort
	I1018 09:38:22.888120   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:22.888255   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:22.888363   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHUsername
	I1018 09:38:22.888538   50493 main.go:141] libmachine: Using SSH client type: native
	I1018 09:38:22.888738   50493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I1018 09:38:22.888743   50493 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 09:38:22.995641   50493 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760780302.948235753
	
	I1018 09:38:22.995656   50493 fix.go:216] guest clock: 1760780302.948235753
	I1018 09:38:22.995664   50493 fix.go:229] Guest: 2025-10-18 09:38:22.948235753 +0000 UTC Remote: 2025-10-18 09:38:22.883123375 +0000 UTC m=+35.054746871 (delta=65.112378ms)
	I1018 09:38:22.995690   50493 fix.go:200] guest clock delta is within tolerance: 65.112378ms
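fix.go above compares the guest clock (parsed from `date +%s.%N` on the VM) against the host clock and skips a reset because the drift is within tolerance. A small sketch of that comparison; the 2-second tolerance is an assumed value for illustration, not minikube's actual threshold:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute drift between the guest and
// host clocks is small enough to leave the guest clock untouched.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1760780302948235753) // value reported by the guest above
	host := time.Now()
	fmt.Printf("delta=%v ok=%v\n", guest.Sub(host), withinTolerance(guest, host, 2*time.Second))
}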
	I1018 09:38:22.995695   50493 start.go:83] releasing machines lock for "cert-options-586276", held for 22.371064225s
	I1018 09:38:22.995725   50493 main.go:141] libmachine: (cert-options-586276) Calling .DriverName
	I1018 09:38:22.996006   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetIP
	I1018 09:38:22.999594   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:23.000012   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:23.000039   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:23.000296   50493 main.go:141] libmachine: (cert-options-586276) Calling .DriverName
	I1018 09:38:23.000883   50493 main.go:141] libmachine: (cert-options-586276) Calling .DriverName
	I1018 09:38:23.001096   50493 main.go:141] libmachine: (cert-options-586276) Calling .DriverName
	I1018 09:38:23.001207   50493 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:38:23.001262   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHHostname
	I1018 09:38:23.001320   50493 ssh_runner.go:195] Run: cat /version.json
	I1018 09:38:23.001339   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHHostname
	I1018 09:38:23.005729   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:23.005814   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:23.006210   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:23.006236   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:23.006261   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:23.006277   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:23.006513   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHPort
	I1018 09:38:23.006649   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHPort
	I1018 09:38:23.006755   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:23.006826   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:23.006888   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHUsername
	I1018 09:38:23.006956   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHUsername
	I1018 09:38:23.007017   50493 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/cert-options-586276/id_rsa Username:docker}
	I1018 09:38:23.007061   50493 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/cert-options-586276/id_rsa Username:docker}
	I1018 09:38:23.115583   50493 ssh_runner.go:195] Run: systemctl --version
	I1018 09:38:23.122527   50493 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:38:23.292534   50493 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:38:23.302413   50493 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:38:23.302478   50493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:38:23.323449   50493 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 09:38:23.323466   50493 start.go:495] detecting cgroup driver to use...
	I1018 09:38:23.323537   50493 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:38:23.346722   50493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:38:23.365618   50493 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:38:23.365678   50493 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:38:23.386260   50493 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:38:23.405111   50493 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:38:23.558293   50493 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:38:23.792124   50493 docker.go:234] disabling docker service ...
	I1018 09:38:23.792175   50493 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:38:23.813906   50493 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:38:23.832823   50493 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:38:24.003815   50493 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:38:24.175235   50493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:38:24.192012   50493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:38:24.220736   50493 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:38:24.220805   50493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:24.234398   50493 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:38:24.234461   50493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:24.249630   50493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:24.263514   50493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:24.277299   50493 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:38:24.293737   50493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:24.308397   50493 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:24.336748   50493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
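The run of `sed -i` commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pinning the pause image, switching cgroup_manager to cgroupfs, and injecting the unprivileged-port sysctl. A minimal sketch of the same edit-in-place pattern in Go, operating on a hypothetical local copy of the file rather than going through ssh_runner:

package main

import (
	"os"
	"regexp"
)

// setConfValue rewrites `key = ...` lines in a TOML-ish config file, mirroring
// the `sed -i 's|^.*key = .*$|key = "value"|'` invocations in the log above.
func setConfValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// Hypothetical local copy of 02-crio.conf, for illustration only.
	_ = setConfValue("02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10.1")
	_ = setConfValue("02-crio.conf", "cgroup_manager", "cgroupfs")
}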
	I1018 09:38:24.354765   50493 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:38:24.368968   50493 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 09:38:24.369040   50493 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 09:38:24.395580   50493 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
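When the sysctl check above fails because /proc/sys/net/bridge does not exist yet, the flow falls back to loading br_netfilter and then enables IPv4 forwarding. A rough sketch of that fallback, assuming it runs as root directly on the guest instead of via ssh_runner:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Verify bridge netfilter; if the key is missing, load the module first.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		log.Printf("sysctl check failed (%v), loading br_netfilter", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` (requires root).
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}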
	I1018 09:38:24.409992   50493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:38:24.576290   50493 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:38:24.717950   50493 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:38:24.718013   50493 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:38:24.725668   50493 start.go:563] Will wait 60s for crictl version
	I1018 09:38:24.725726   50493 ssh_runner.go:195] Run: which crictl
	I1018 09:38:24.731287   50493 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 09:38:24.780401   50493 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 09:38:24.780473   50493 ssh_runner.go:195] Run: crio --version
	I1018 09:38:24.815459   50493 ssh_runner.go:195] Run: crio --version
	I1018 09:38:24.850469   50493 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 09:38:22.999272   51011 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1018 09:38:22.999494   51011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:38:22.999548   51011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:38:23.016066   51011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44141
	I1018 09:38:23.016551   51011 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:38:23.017184   51011 main.go:141] libmachine: Using API Version  1
	I1018 09:38:23.017210   51011 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:38:23.017593   51011 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:38:23.017785   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetMachineName
	I1018 09:38:23.018041   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .DriverName
	I1018 09:38:23.018258   51011 start.go:159] libmachine.API.Create for "old-k8s-version-874951" (driver="kvm2")
	I1018 09:38:23.018297   51011 client.go:168] LocalClient.Create starting
	I1018 09:38:23.018338   51011 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem
	I1018 09:38:23.018380   51011 main.go:141] libmachine: Decoding PEM data...
	I1018 09:38:23.018405   51011 main.go:141] libmachine: Parsing certificate...
	I1018 09:38:23.018470   51011 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem
	I1018 09:38:23.018495   51011 main.go:141] libmachine: Decoding PEM data...
	I1018 09:38:23.018510   51011 main.go:141] libmachine: Parsing certificate...
	I1018 09:38:23.018530   51011 main.go:141] libmachine: Running pre-create checks...
	I1018 09:38:23.018550   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .PreCreateCheck
	I1018 09:38:23.018930   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetConfigRaw
	I1018 09:38:23.019418   51011 main.go:141] libmachine: Creating machine...
	I1018 09:38:23.019436   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .Create
	I1018 09:38:23.019588   51011 main.go:141] libmachine: (old-k8s-version-874951) creating domain...
	I1018 09:38:23.019606   51011 main.go:141] libmachine: (old-k8s-version-874951) creating network...
	I1018 09:38:23.021315   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found existing default network
	I1018 09:38:23.021481   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | <network connections='3'>
	I1018 09:38:23.021511   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <name>default</name>
	I1018 09:38:23.021536   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1018 09:38:23.021553   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <forward mode='nat'>
	I1018 09:38:23.021562   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <nat>
	I1018 09:38:23.021579   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <port start='1024' end='65535'/>
	I1018 09:38:23.021591   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     </nat>
	I1018 09:38:23.021598   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   </forward>
	I1018 09:38:23.021611   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1018 09:38:23.021638   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1018 09:38:23.021674   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1018 09:38:23.021700   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <dhcp>
	I1018 09:38:23.021716   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1018 09:38:23.021727   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     </dhcp>
	I1018 09:38:23.021737   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   </ip>
	I1018 09:38:23.021749   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | </network>
	I1018 09:38:23.021760   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | 
	I1018 09:38:23.022759   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:23.022577   51090 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:52:03:49} reservation:<nil>}
	I1018 09:38:23.023633   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:23.023546   51090 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:49:92:40} reservation:<nil>}
	I1018 09:38:23.025865   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:23.025696   51090 network.go:209] skipping subnet 192.168.61.0/24 that is reserved: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1018 09:38:23.026499   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:23.026392   51090 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:16:a6:d0} reservation:<nil>}
	I1018 09:38:23.027684   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:23.027571   51090 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001307d0}
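network.go above scans candidate 192.168.x.0/24 subnets, skipping ones that are taken or reserved, and settles on 192.168.83.0/24. A simplified sketch of that scan that only checks local interface addresses; the real code also honours reserved ranges and records a reservation:

package main

import (
	"fmt"
	"net"
)

// takenLocally reports whether any local interface address falls inside subnet.
func takenLocally(subnet *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
			return true
		}
	}
	return false
}

func main() {
	// Candidate third octets mirroring the 192.168.x.0/24 sequence in the log.
	for _, octet := range []int{39, 50, 61, 72, 83, 94} {
		_, subnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
		if takenLocally(subnet) {
			fmt.Printf("skipping subnet %s that is taken\n", subnet)
			continue
		}
		fmt.Printf("using free private subnet %s\n", subnet)
		break
	}
}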
	I1018 09:38:23.027736   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | defining private network:
	I1018 09:38:23.027759   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | 
	I1018 09:38:23.027783   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | <network>
	I1018 09:38:23.027816   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <name>mk-old-k8s-version-874951</name>
	I1018 09:38:23.027840   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <dns enable='no'/>
	I1018 09:38:23.027860   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I1018 09:38:23.027873   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <dhcp>
	I1018 09:38:23.027883   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I1018 09:38:23.027906   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     </dhcp>
	I1018 09:38:23.027937   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   </ip>
	I1018 09:38:23.027950   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | </network>
	I1018 09:38:23.027960   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | 
	I1018 09:38:23.038583   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | creating private network mk-old-k8s-version-874951 192.168.83.0/24...
	I1018 09:38:23.127402   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | private network mk-old-k8s-version-874951 192.168.83.0/24 created
	I1018 09:38:23.127642   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | <network>
	I1018 09:38:23.127662   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <name>mk-old-k8s-version-874951</name>
	I1018 09:38:23.127675   51011 main.go:141] libmachine: (old-k8s-version-874951) setting up store path in /home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951 ...
	I1018 09:38:23.127693   51011 main.go:141] libmachine: (old-k8s-version-874951) building disk image from file:///home/jenkins/minikube-integration/21767-6063/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 09:38:23.127710   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <uuid>9a8f548b-a387-4ec8-b4be-0367af1783a8</uuid>
	I1018 09:38:23.127740   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1018 09:38:23.127758   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <mac address='52:54:00:b5:70:96'/>
	I1018 09:38:23.127792   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <dns enable='no'/>
	I1018 09:38:23.127828   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I1018 09:38:23.127844   51011 main.go:141] libmachine: (old-k8s-version-874951) Downloading /home/jenkins/minikube-integration/21767-6063/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21767-6063/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1018 09:38:23.127860   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <dhcp>
	I1018 09:38:23.127879   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I1018 09:38:23.127893   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     </dhcp>
	I1018 09:38:23.127903   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   </ip>
	I1018 09:38:23.127935   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | </network>
	I1018 09:38:23.127956   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | 
	I1018 09:38:23.127986   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:23.127634   51090 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21767-6063/.minikube
	I1018 09:38:23.387972   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:23.387839   51090 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951/id_rsa...
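common.go:151 above creates the machine's SSH key pair before the raw disk image is written. A minimal sketch of generating such a key pair; it writes hypothetical id_rsa/id_rsa.pub files in the working directory and pulls in golang.org/x/crypto/ssh for the authorized_keys encoding:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh" // external module, used only for the public-key encoding
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private key in PEM form, as consumed by the sshutil client shown later in the log.
	privPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		panic(err)
	}
	// Matching public key in authorized_keys format.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
}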
	I1018 09:38:23.764128   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:23.763969   51090 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951/old-k8s-version-874951.rawdisk...
	I1018 09:38:23.764164   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | Writing magic tar header
	I1018 09:38:23.764195   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | Writing SSH key tar header
	I1018 09:38:23.764207   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:23.764114   51090 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951 ...
	I1018 09:38:23.764232   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951
	I1018 09:38:23.764301   51011 main.go:141] libmachine: (old-k8s-version-874951) setting executable bit set on /home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951 (perms=drwx------)
	I1018 09:38:23.764337   51011 main.go:141] libmachine: (old-k8s-version-874951) setting executable bit set on /home/jenkins/minikube-integration/21767-6063/.minikube/machines (perms=drwxr-xr-x)
	I1018 09:38:23.764350   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21767-6063/.minikube/machines
	I1018 09:38:23.764368   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21767-6063/.minikube
	I1018 09:38:23.764377   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21767-6063
	I1018 09:38:23.764385   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1018 09:38:23.764395   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | checking permissions on dir: /home/jenkins
	I1018 09:38:23.764407   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | checking permissions on dir: /home
	I1018 09:38:23.764423   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | skipping /home - not owner
	I1018 09:38:23.764437   51011 main.go:141] libmachine: (old-k8s-version-874951) setting executable bit set on /home/jenkins/minikube-integration/21767-6063/.minikube (perms=drwxr-xr-x)
	I1018 09:38:23.764451   51011 main.go:141] libmachine: (old-k8s-version-874951) setting executable bit set on /home/jenkins/minikube-integration/21767-6063 (perms=drwxrwxr-x)
	I1018 09:38:23.764461   51011 main.go:141] libmachine: (old-k8s-version-874951) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1018 09:38:23.764469   51011 main.go:141] libmachine: (old-k8s-version-874951) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1018 09:38:23.764476   51011 main.go:141] libmachine: (old-k8s-version-874951) defining domain...
	I1018 09:38:23.765740   51011 main.go:141] libmachine: (old-k8s-version-874951) defining domain using XML: 
	I1018 09:38:23.765774   51011 main.go:141] libmachine: (old-k8s-version-874951) <domain type='kvm'>
	I1018 09:38:23.765804   51011 main.go:141] libmachine: (old-k8s-version-874951)   <name>old-k8s-version-874951</name>
	I1018 09:38:23.765817   51011 main.go:141] libmachine: (old-k8s-version-874951)   <memory unit='MiB'>3072</memory>
	I1018 09:38:23.765826   51011 main.go:141] libmachine: (old-k8s-version-874951)   <vcpu>2</vcpu>
	I1018 09:38:23.765836   51011 main.go:141] libmachine: (old-k8s-version-874951)   <features>
	I1018 09:38:23.765842   51011 main.go:141] libmachine: (old-k8s-version-874951)     <acpi/>
	I1018 09:38:23.765849   51011 main.go:141] libmachine: (old-k8s-version-874951)     <apic/>
	I1018 09:38:23.765854   51011 main.go:141] libmachine: (old-k8s-version-874951)     <pae/>
	I1018 09:38:23.765862   51011 main.go:141] libmachine: (old-k8s-version-874951)   </features>
	I1018 09:38:23.765889   51011 main.go:141] libmachine: (old-k8s-version-874951)   <cpu mode='host-passthrough'>
	I1018 09:38:23.765913   51011 main.go:141] libmachine: (old-k8s-version-874951)   </cpu>
	I1018 09:38:23.766013   51011 main.go:141] libmachine: (old-k8s-version-874951)   <os>
	I1018 09:38:23.766041   51011 main.go:141] libmachine: (old-k8s-version-874951)     <type>hvm</type>
	I1018 09:38:23.766058   51011 main.go:141] libmachine: (old-k8s-version-874951)     <boot dev='cdrom'/>
	I1018 09:38:23.766069   51011 main.go:141] libmachine: (old-k8s-version-874951)     <boot dev='hd'/>
	I1018 09:38:23.766088   51011 main.go:141] libmachine: (old-k8s-version-874951)     <bootmenu enable='no'/>
	I1018 09:38:23.766106   51011 main.go:141] libmachine: (old-k8s-version-874951)   </os>
	I1018 09:38:23.766151   51011 main.go:141] libmachine: (old-k8s-version-874951)   <devices>
	I1018 09:38:23.766168   51011 main.go:141] libmachine: (old-k8s-version-874951)     <disk type='file' device='cdrom'>
	I1018 09:38:23.766183   51011 main.go:141] libmachine: (old-k8s-version-874951)       <source file='/home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951/boot2docker.iso'/>
	I1018 09:38:23.766195   51011 main.go:141] libmachine: (old-k8s-version-874951)       <target dev='hdc' bus='scsi'/>
	I1018 09:38:23.766225   51011 main.go:141] libmachine: (old-k8s-version-874951)       <readonly/>
	I1018 09:38:23.766259   51011 main.go:141] libmachine: (old-k8s-version-874951)     </disk>
	I1018 09:38:23.766284   51011 main.go:141] libmachine: (old-k8s-version-874951)     <disk type='file' device='disk'>
	I1018 09:38:23.766297   51011 main.go:141] libmachine: (old-k8s-version-874951)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1018 09:38:23.766312   51011 main.go:141] libmachine: (old-k8s-version-874951)       <source file='/home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951/old-k8s-version-874951.rawdisk'/>
	I1018 09:38:23.766320   51011 main.go:141] libmachine: (old-k8s-version-874951)       <target dev='hda' bus='virtio'/>
	I1018 09:38:23.766328   51011 main.go:141] libmachine: (old-k8s-version-874951)     </disk>
	I1018 09:38:23.766336   51011 main.go:141] libmachine: (old-k8s-version-874951)     <interface type='network'>
	I1018 09:38:23.766346   51011 main.go:141] libmachine: (old-k8s-version-874951)       <source network='mk-old-k8s-version-874951'/>
	I1018 09:38:23.766357   51011 main.go:141] libmachine: (old-k8s-version-874951)       <model type='virtio'/>
	I1018 09:38:23.766370   51011 main.go:141] libmachine: (old-k8s-version-874951)     </interface>
	I1018 09:38:23.766381   51011 main.go:141] libmachine: (old-k8s-version-874951)     <interface type='network'>
	I1018 09:38:23.766391   51011 main.go:141] libmachine: (old-k8s-version-874951)       <source network='default'/>
	I1018 09:38:23.766409   51011 main.go:141] libmachine: (old-k8s-version-874951)       <model type='virtio'/>
	I1018 09:38:23.766420   51011 main.go:141] libmachine: (old-k8s-version-874951)     </interface>
	I1018 09:38:23.766428   51011 main.go:141] libmachine: (old-k8s-version-874951)     <serial type='pty'>
	I1018 09:38:23.766440   51011 main.go:141] libmachine: (old-k8s-version-874951)       <target port='0'/>
	I1018 09:38:23.766449   51011 main.go:141] libmachine: (old-k8s-version-874951)     </serial>
	I1018 09:38:23.766472   51011 main.go:141] libmachine: (old-k8s-version-874951)     <console type='pty'>
	I1018 09:38:23.766491   51011 main.go:141] libmachine: (old-k8s-version-874951)       <target type='serial' port='0'/>
	I1018 09:38:23.766504   51011 main.go:141] libmachine: (old-k8s-version-874951)     </console>
	I1018 09:38:23.766515   51011 main.go:141] libmachine: (old-k8s-version-874951)     <rng model='virtio'>
	I1018 09:38:23.766525   51011 main.go:141] libmachine: (old-k8s-version-874951)       <backend model='random'>/dev/random</backend>
	I1018 09:38:23.766536   51011 main.go:141] libmachine: (old-k8s-version-874951)     </rng>
	I1018 09:38:23.766544   51011 main.go:141] libmachine: (old-k8s-version-874951)   </devices>
	I1018 09:38:23.766562   51011 main.go:141] libmachine: (old-k8s-version-874951) </domain>
	I1018 09:38:23.766576   51011 main.go:141] libmachine: (old-k8s-version-874951) 
	I1018 09:38:23.771606   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:35:fe:75 in network default
	I1018 09:38:23.772227   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:23.772255   51011 main.go:141] libmachine: (old-k8s-version-874951) starting domain...
	I1018 09:38:23.772267   51011 main.go:141] libmachine: (old-k8s-version-874951) ensuring networks are active...
	I1018 09:38:23.772915   51011 main.go:141] libmachine: (old-k8s-version-874951) Ensuring network default is active
	I1018 09:38:23.773447   51011 main.go:141] libmachine: (old-k8s-version-874951) Ensuring network mk-old-k8s-version-874951 is active
	I1018 09:38:23.774300   51011 main.go:141] libmachine: (old-k8s-version-874951) getting domain XML...
	I1018 09:38:23.775564   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | starting domain XML:
	I1018 09:38:23.775584   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | <domain type='kvm'>
	I1018 09:38:23.775596   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <name>old-k8s-version-874951</name>
	I1018 09:38:23.775605   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <uuid>cf8c742b-cc89-4b85-a5f3-6e735150dfc4</uuid>
	I1018 09:38:23.775614   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <memory unit='KiB'>3145728</memory>
	I1018 09:38:23.775623   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1018 09:38:23.775634   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 09:38:23.775643   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <os>
	I1018 09:38:23.775653   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 09:38:23.775666   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <boot dev='cdrom'/>
	I1018 09:38:23.775675   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <boot dev='hd'/>
	I1018 09:38:23.775687   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <bootmenu enable='no'/>
	I1018 09:38:23.775695   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   </os>
	I1018 09:38:23.775702   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <features>
	I1018 09:38:23.775732   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <acpi/>
	I1018 09:38:23.775745   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <apic/>
	I1018 09:38:23.775753   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <pae/>
	I1018 09:38:23.775768   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   </features>
	I1018 09:38:23.775782   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 09:38:23.775789   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <clock offset='utc'/>
	I1018 09:38:23.775812   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 09:38:23.775824   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <on_reboot>restart</on_reboot>
	I1018 09:38:23.775836   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <on_crash>destroy</on_crash>
	I1018 09:38:23.775846   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   <devices>
	I1018 09:38:23.775855   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 09:38:23.775870   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <disk type='file' device='cdrom'>
	I1018 09:38:23.775883   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <driver name='qemu' type='raw'/>
	I1018 09:38:23.775895   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <source file='/home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951/boot2docker.iso'/>
	I1018 09:38:23.775908   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 09:38:23.775916   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <readonly/>
	I1018 09:38:23.775976   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 09:38:23.775996   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     </disk>
	I1018 09:38:23.776020   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <disk type='file' device='disk'>
	I1018 09:38:23.776038   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 09:38:23.776054   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <source file='/home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951/old-k8s-version-874951.rawdisk'/>
	I1018 09:38:23.776063   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <target dev='hda' bus='virtio'/>
	I1018 09:38:23.776073   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 09:38:23.776081   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     </disk>
	I1018 09:38:23.776091   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 09:38:23.776104   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 09:38:23.776144   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     </controller>
	I1018 09:38:23.776168   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 09:38:23.776181   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 09:38:23.776196   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 09:38:23.776218   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     </controller>
	I1018 09:38:23.776231   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <interface type='network'>
	I1018 09:38:23.776246   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <mac address='52:54:00:e6:82:d2'/>
	I1018 09:38:23.776264   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <source network='mk-old-k8s-version-874951'/>
	I1018 09:38:23.776277   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <model type='virtio'/>
	I1018 09:38:23.776289   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 09:38:23.776300   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     </interface>
	I1018 09:38:23.776308   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <interface type='network'>
	I1018 09:38:23.776320   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <mac address='52:54:00:35:fe:75'/>
	I1018 09:38:23.776328   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <source network='default'/>
	I1018 09:38:23.776353   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <model type='virtio'/>
	I1018 09:38:23.776375   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 09:38:23.776389   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     </interface>
	I1018 09:38:23.776400   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <serial type='pty'>
	I1018 09:38:23.776413   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <target type='isa-serial' port='0'>
	I1018 09:38:23.776426   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |         <model name='isa-serial'/>
	I1018 09:38:23.776434   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       </target>
	I1018 09:38:23.776444   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     </serial>
	I1018 09:38:23.776453   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <console type='pty'>
	I1018 09:38:23.776464   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <target type='serial' port='0'/>
	I1018 09:38:23.776472   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     </console>
	I1018 09:38:23.776483   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <input type='mouse' bus='ps2'/>
	I1018 09:38:23.776503   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 09:38:23.776515   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <audio id='1' type='none'/>
	I1018 09:38:23.776527   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <memballoon model='virtio'>
	I1018 09:38:23.776550   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 09:38:23.776561   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     </memballoon>
	I1018 09:38:23.776572   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     <rng model='virtio'>
	I1018 09:38:23.776584   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <backend model='random'>/dev/random</backend>
	I1018 09:38:23.776596   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 09:38:23.776607   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |     </rng>
	I1018 09:38:23.776617   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG |   </devices>
	I1018 09:38:23.776628   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | </domain>
	I1018 09:38:23.776637   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | 
	I1018 09:38:25.379538   51011 main.go:141] libmachine: (old-k8s-version-874951) waiting for domain to start...
	I1018 09:38:25.381857   51011 main.go:141] libmachine: (old-k8s-version-874951) domain is now running
	I1018 09:38:25.381893   51011 main.go:141] libmachine: (old-k8s-version-874951) waiting for IP...
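For context, the <domain> XML dumped above is what the kvm2 driver hands to libvirt right before the "domain is now running" / "waiting for IP" lines. A rough, hand-verifiable equivalent (not minikube's actual code) is the same define/start/address-lookup sequence via virsh; the file name domain.xml is an assumption here:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumes the <domain> definition logged above was saved to domain.xml.
	cmds := [][]string{
		{"virsh", "-c", "qemu:///system", "define", "domain.xml"},
		{"virsh", "-c", "qemu:///system", "start", "old-k8s-version-874951"},
		// Same lookup order the driver logs: DHCP lease first, ARP as fallback.
		{"virsh", "-c", "qemu:///system", "domifaddr", "old-k8s-version-874951", "--source", "lease"},
		{"virsh", "-c", "qemu:///system", "domifaddr", "old-k8s-version-874951", "--source", "arp"},
	}
	for _, args := range cmds {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Printf("$ %s\n%s(err=%v)\n", args, out, err)
	}
}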
	I1018 09:38:25.383067   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:25.384000   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | no network interface addresses found for domain old-k8s-version-874951 (source=lease)
	I1018 09:38:25.384056   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | trying to list again with source=arp
	I1018 09:38:25.384485   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | unable to find current IP address of domain old-k8s-version-874951 in network mk-old-k8s-version-874951 (interfaces detected: [])
	I1018 09:38:25.384656   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:25.384498   51090 retry.go:31] will retry after 196.238692ms: waiting for domain to come up
	I1018 09:38:25.583754   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:25.584675   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | no network interface addresses found for domain old-k8s-version-874951 (source=lease)
	I1018 09:38:25.584698   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | trying to list again with source=arp
	I1018 09:38:25.585419   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | unable to find current IP address of domain old-k8s-version-874951 in network mk-old-k8s-version-874951 (interfaces detected: [])
	I1018 09:38:25.585473   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:25.585344   51090 retry.go:31] will retry after 240.137837ms: waiting for domain to come up
	I1018 09:38:25.827428   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:25.828652   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | no network interface addresses found for domain old-k8s-version-874951 (source=lease)
	I1018 09:38:25.828743   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | trying to list again with source=arp
	I1018 09:38:25.829011   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | unable to find current IP address of domain old-k8s-version-874951 in network mk-old-k8s-version-874951 (interfaces detected: [])
	I1018 09:38:25.829274   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:25.829197   51090 retry.go:31] will retry after 448.88298ms: waiting for domain to come up
	I1018 09:38:26.280383   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:26.281232   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | no network interface addresses found for domain old-k8s-version-874951 (source=lease)
	I1018 09:38:26.281277   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | trying to list again with source=arp
	I1018 09:38:26.281744   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | unable to find current IP address of domain old-k8s-version-874951 in network mk-old-k8s-version-874951 (interfaces detected: [])
	I1018 09:38:26.281770   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:26.281634   51090 retry.go:31] will retry after 444.78865ms: waiting for domain to come up
	I1018 09:38:26.728708   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:26.729460   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | no network interface addresses found for domain old-k8s-version-874951 (source=lease)
	I1018 09:38:26.729482   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | trying to list again with source=arp
	I1018 09:38:26.729975   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | unable to find current IP address of domain old-k8s-version-874951 in network mk-old-k8s-version-874951 (interfaces detected: [])
	I1018 09:38:26.730008   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:26.729890   51090 retry.go:31] will retry after 563.993813ms: waiting for domain to come up
	I1018 09:38:24.852095   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetIP
	I1018 09:38:24.856465   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:24.857009   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:24.857035   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:24.857338   50493 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1018 09:38:24.862601   50493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:38:24.883007   50493 kubeadm.go:883] updating cluster {Name:cert-options-586276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-options-586276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.94 Port:8555 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:38:24.883130   50493 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:38:24.883191   50493 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:38:24.923768   50493 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 09:38:24.923834   50493 ssh_runner.go:195] Run: which lz4
	I1018 09:38:24.928373   50493 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 09:38:24.933834   50493 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 09:38:24.933861   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1018 09:38:26.631759   50493 crio.go:462] duration metric: took 1.703440991s to copy over tarball
	I1018 09:38:26.631833   50493 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1018 09:38:24.238058   50299 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.407918897s)
	I1018 09:38:24.238138   50299 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:38:24.557548   50299 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:38:24.639808   50299 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:38:24.780704   50299 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:38:24.780796   50299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:38:25.281713   50299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:38:25.781845   50299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:38:25.832222   50299 api_server.go:72] duration metric: took 1.051524604s to wait for apiserver process to appear ...
	I1018 09:38:25.832249   50299 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:38:25.832270   50299 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 09:38:25.832858   50299 api_server.go:269] stopped: https://192.168.72.16:8443/healthz: Get "https://192.168.72.16:8443/healthz": dial tcp 192.168.72.16:8443: connect: connection refused
	I1018 09:38:26.333107   50299 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 09:38:28.713035   50299 api_server.go:279] https://192.168.72.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:38:28.713068   50299 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:38:28.713087   50299 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 09:38:28.766785   50299 api_server.go:279] https://192.168.72.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:38:28.766830   50299 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:38:27.296001   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:27.296742   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | no network interface addresses found for domain old-k8s-version-874951 (source=lease)
	I1018 09:38:27.296774   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | trying to list again with source=arp
	I1018 09:38:27.297191   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | unable to find current IP address of domain old-k8s-version-874951 in network mk-old-k8s-version-874951 (interfaces detected: [])
	I1018 09:38:27.297226   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:27.297164   51090 retry.go:31] will retry after 938.57741ms: waiting for domain to come up
	I1018 09:38:28.238275   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:28.238997   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | no network interface addresses found for domain old-k8s-version-874951 (source=lease)
	I1018 09:38:28.239027   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | trying to list again with source=arp
	I1018 09:38:28.239498   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | unable to find current IP address of domain old-k8s-version-874951 in network mk-old-k8s-version-874951 (interfaces detected: [])
	I1018 09:38:28.239529   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:28.239454   51090 retry.go:31] will retry after 992.50758ms: waiting for domain to come up
	I1018 09:38:29.233648   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:29.234314   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | no network interface addresses found for domain old-k8s-version-874951 (source=lease)
	I1018 09:38:29.234337   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | trying to list again with source=arp
	I1018 09:38:29.234629   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | unable to find current IP address of domain old-k8s-version-874951 in network mk-old-k8s-version-874951 (interfaces detected: [])
	I1018 09:38:29.234712   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:29.234632   51090 retry.go:31] will retry after 1.300627995s: waiting for domain to come up
	I1018 09:38:30.537223   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:30.538095   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | no network interface addresses found for domain old-k8s-version-874951 (source=lease)
	I1018 09:38:30.538126   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | trying to list again with source=arp
	I1018 09:38:30.538398   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | unable to find current IP address of domain old-k8s-version-874951 in network mk-old-k8s-version-874951 (interfaces detected: [])
	I1018 09:38:30.538487   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:30.538409   51090 retry.go:31] will retry after 1.467400896s: waiting for domain to come up
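The interleaved "waiting for domain to come up" lines above follow a plain retry-with-growing-delay pattern. A minimal sketch of that loop (generic Go, not the driver's retry.go; lookupIP is a hypothetical stand-in for the lease/ARP query):

package main

import (
	"fmt"
	"time"
)

// waitForIP polls lookupIP until it reports an address or the timeout expires,
// increasing the delay between attempts much like the retry lines in the log.
func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ip, ok := lookupIP(); ok {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // rough backoff; the real delays above also include jitter
	}
	return "", fmt.Errorf("no IP address within %v", timeout)
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, bool) {
		attempts++
		return "192.168.61.10", attempts >= 4 // fake lookup for the demo
	}, 30*time.Second)
	fmt.Println(ip, err)
}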
	I1018 09:38:28.833004   50299 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 09:38:28.841830   50299 api_server.go:279] https://192.168.72.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:38:28.841866   50299 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:38:29.333100   50299 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 09:38:29.338394   50299 api_server.go:279] https://192.168.72.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:38:29.338446   50299 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:38:29.833133   50299 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 09:38:30.987255   50299 api_server.go:279] https://192.168.72.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:38:30.987296   50299 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:38:30.987318   50299 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 09:38:30.994077   50299 api_server.go:279] https://192.168.72.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:38:30.994109   50299 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:38:31.332455   50299 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 09:38:31.339259   50299 api_server.go:279] https://192.168.72.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:38:31.339287   50299 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:38:31.833011   50299 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 09:38:31.838499   50299 api_server.go:279] https://192.168.72.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:38:31.838534   50299 api_server.go:103] status: https://192.168.72.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:38:32.333119   50299 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 09:38:32.338659   50299 api_server.go:279] https://192.168.72.16:8443/healthz returned 200:
	ok
	I1018 09:38:32.347464   50299 api_server.go:141] control plane version: v1.34.1
	I1018 09:38:32.347503   50299 api_server.go:131] duration metric: took 6.515246639s to wait for apiserver health ...
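The healthz wait that just completed (an anonymous 403, then repeated 500s while poststarthooks finish, then 200 after ~6.5s) reduces to polling the endpoint until it returns HTTP 200. A minimal sketch of such a loop (an assumed illustration, not api_server.go itself):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint roughly every 500ms
// and treats anything other than HTTP 200 as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's cert is not trusted by the host, so skip verification,
		// as an anonymous healthz probe would.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is up
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.16:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}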
	I1018 09:38:32.347513   50299 cni.go:84] Creating CNI manager for ""
	I1018 09:38:32.347522   50299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:38:32.349479   50299 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 09:38:32.350784   50299 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 09:38:32.366225   50299 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1018 09:38:32.391363   50299 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:38:32.397893   50299 system_pods.go:59] 6 kube-system pods found
	I1018 09:38:32.397953   50299 system_pods.go:61] "coredns-66bc5c9577-gkqrn" [80039a0f-d663-4568-85a8-f35ea7394b79] Running
	I1018 09:38:32.397974   50299 system_pods.go:61] "etcd-pause-251981" [25e83f85-43a4-429d-964f-b4f7ad608035] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:38:32.397983   50299 system_pods.go:61] "kube-apiserver-pause-251981" [9eb65d8c-370f-4632-ab3f-8210dd4d618d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:38:32.398000   50299 system_pods.go:61] "kube-controller-manager-pause-251981" [2e054d57-0eb7-40d9-a60e-dfdb0750b7cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:38:32.398016   50299 system_pods.go:61] "kube-proxy-hh69n" [91ff45f3-e63f-4bc3-8bf8-d805a6f89864] Running
	I1018 09:38:32.398026   50299 system_pods.go:61] "kube-scheduler-pause-251981" [96ff66b7-5eab-409e-bb20-9fa19d294edf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:38:32.398037   50299 system_pods.go:74] duration metric: took 6.640807ms to wait for pod list to return data ...
	I1018 09:38:32.398047   50299 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:38:32.403004   50299 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 09:38:32.403055   50299 node_conditions.go:123] node cpu capacity is 2
	I1018 09:38:32.403094   50299 node_conditions.go:105] duration metric: took 5.041112ms to run NodePressure ...
	I1018 09:38:32.403185   50299 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:38:32.679292   50299 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1018 09:38:32.683186   50299 kubeadm.go:743] kubelet initialised
	I1018 09:38:32.683211   50299 kubeadm.go:744] duration metric: took 3.890834ms waiting for restarted kubelet to initialise ...
	I1018 09:38:32.683232   50299 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:38:32.702525   50299 ops.go:34] apiserver oom_adj: -16
	I1018 09:38:32.702551   50299 kubeadm.go:601] duration metric: took 22.558930361s to restartPrimaryControlPlane
	I1018 09:38:32.702560   50299 kubeadm.go:402] duration metric: took 22.836761308s to StartCluster
	I1018 09:38:32.702577   50299 settings.go:142] acquiring lock: {Name:mk5c51ba919dd454ddb697f518b92637a3560487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:38:32.702652   50299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-6063/kubeconfig
	I1018 09:38:32.703891   50299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/kubeconfig: {Name:mkb340db398364bcc27d468da7444ccfad7b82c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:38:32.704201   50299 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.16 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:38:32.704276   50299 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:38:32.704471   50299 config.go:182] Loaded profile config "pause-251981": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:38:32.706001   50299 out.go:179] * Verifying Kubernetes components...
	I1018 09:38:32.705993   50299 out.go:179] * Enabled addons: 
	I1018 09:38:28.678375   50493 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.046510722s)
	I1018 09:38:28.678401   50493 crio.go:469] duration metric: took 2.046616058s to extract the tarball
	I1018 09:38:28.678410   50493 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1018 09:38:28.725329   50493 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:38:28.781254   50493 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:38:28.781271   50493 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:38:28.781279   50493 kubeadm.go:934] updating node { 192.168.50.94 8555 v1.34.1 crio true true} ...
	I1018 09:38:28.781412   50493 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-options-586276 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-options-586276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:38:28.781505   50493 ssh_runner.go:195] Run: crio config
	I1018 09:38:28.835339   50493 cni.go:84] Creating CNI manager for ""
	I1018 09:38:28.835361   50493 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:38:28.835382   50493 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:38:28.835404   50493 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.94 APIServerPort:8555 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-586276 NodeName:cert-options-586276 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:38:28.835526   50493 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.94
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-options-586276"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.94"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.94"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:38:28.835585   50493 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:38:28.850837   50493 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:38:28.850910   50493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:38:28.865037   50493 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1018 09:38:28.892732   50493 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:38:28.917104   50493 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
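
For reference, the kubeadm.yaml copied to the node above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). The following is a minimal Go sketch, not taken from the minikube sources, that splits such a file on document separators and reports the kind declared in each document; the local filename is an assumption.

// split_kubeadm_docs.go - hypothetical helper, not part of the test run above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("kubeadm.yaml") // assumed local copy of the generated config
	if err != nil {
		panic(err)
	}
	// Documents are separated by a line containing only "---".
	for i, doc := range strings.Split(string(raw), "\n---\n") {
		kind := "unknown"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}
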
	I1018 09:38:28.941547   50493 ssh_runner.go:195] Run: grep 192.168.50.94	control-plane.minikube.internal$ /etc/hosts
	I1018 09:38:28.946043   50493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:38:28.962831   50493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:38:29.123905   50493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:38:29.146419   50493 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276 for IP: 192.168.50.94
	I1018 09:38:29.146445   50493 certs.go:195] generating shared ca certs ...
	I1018 09:38:29.146464   50493 certs.go:227] acquiring lock for ca certs: {Name:mk72b8eadb27773dc6399bddc4b95ee0664cbf67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:38:29.146698   50493 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-6063/.minikube/ca.key
	I1018 09:38:29.146745   50493 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.key
	I1018 09:38:29.146753   50493 certs.go:257] generating profile certs ...
	I1018 09:38:29.146825   50493 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/client.key
	I1018 09:38:29.146845   50493 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/client.crt with IP's: []
	I1018 09:38:29.268375   50493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/client.crt ...
	I1018 09:38:29.268392   50493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/client.crt: {Name:mk539dfc23126e08e13da97fb69705039625f2b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:38:29.268570   50493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/client.key ...
	I1018 09:38:29.268576   50493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/client.key: {Name:mk130e47a97e68abd71e8d510a87c20a0f125c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:38:29.268654   50493 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/apiserver.key.7ab3ac8e
	I1018 09:38:29.268663   50493 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/apiserver.crt.7ab3ac8e with IP's: [127.0.0.1 192.168.15.15 10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.94]
	I1018 09:38:29.291713   50493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/apiserver.crt.7ab3ac8e ...
	I1018 09:38:29.291729   50493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/apiserver.crt.7ab3ac8e: {Name:mkf19129b4201f89b2cd3a94bd389e1de13d9b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:38:29.291890   50493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/apiserver.key.7ab3ac8e ...
	I1018 09:38:29.291899   50493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/apiserver.key.7ab3ac8e: {Name:mkb49a2918f44703a0efe7a3b67eb6150d9936bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:38:29.291985   50493 certs.go:382] copying /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/apiserver.crt.7ab3ac8e -> /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/apiserver.crt
	I1018 09:38:29.292082   50493 certs.go:386] copying /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/apiserver.key.7ab3ac8e -> /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/apiserver.key
	I1018 09:38:29.292138   50493 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/proxy-client.key
	I1018 09:38:29.292151   50493 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/proxy-client.crt with IP's: []
	I1018 09:38:29.667548   50493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/proxy-client.crt ...
	I1018 09:38:29.667564   50493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/proxy-client.crt: {Name:mkc549c966a60ca976ea622d0d0490e140ac963b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:38:29.667728   50493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/proxy-client.key ...
	I1018 09:38:29.667737   50493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/proxy-client.key: {Name:mkd26785b4afbe9ed58b8c9aa17933d11246b51d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:38:29.667914   50493 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/9956.pem (1338 bytes)
	W1018 09:38:29.667960   50493 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-6063/.minikube/certs/9956_empty.pem, impossibly tiny 0 bytes
	I1018 09:38:29.667969   50493 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:38:29.667989   50493 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:38:29.668007   50493 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:38:29.668026   50493 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/key.pem (1675 bytes)
	I1018 09:38:29.668060   50493 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-6063/.minikube/files/etc/ssl/certs/99562.pem (1708 bytes)
	I1018 09:38:29.668599   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:38:29.710961   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:38:29.753425   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:38:29.788582   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:38:29.826063   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1480 bytes)
	I1018 09:38:29.884868   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:38:29.975286   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:38:30.037226   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/cert-options-586276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:38:30.075492   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:38:30.107998   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/certs/9956.pem --> /usr/share/ca-certificates/9956.pem (1338 bytes)
	I1018 09:38:30.142184   50493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/files/etc/ssl/certs/99562.pem --> /usr/share/ca-certificates/99562.pem (1708 bytes)
	I1018 09:38:30.177387   50493 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:38:30.200274   50493 ssh_runner.go:195] Run: openssl version
	I1018 09:38:30.207115   50493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:38:30.222185   50493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:38:30.229007   50493 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:38:30.229073   50493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:38:30.237107   50493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:38:30.252901   50493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9956.pem && ln -fs /usr/share/ca-certificates/9956.pem /etc/ssl/certs/9956.pem"
	I1018 09:38:30.268350   50493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9956.pem
	I1018 09:38:30.274235   50493 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:38 /usr/share/ca-certificates/9956.pem
	I1018 09:38:30.274289   50493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9956.pem
	I1018 09:38:30.284993   50493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9956.pem /etc/ssl/certs/51391683.0"
	I1018 09:38:30.300541   50493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99562.pem && ln -fs /usr/share/ca-certificates/99562.pem /etc/ssl/certs/99562.pem"
	I1018 09:38:30.315091   50493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99562.pem
	I1018 09:38:30.321434   50493 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:38 /usr/share/ca-certificates/99562.pem
	I1018 09:38:30.321493   50493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99562.pem
	I1018 09:38:30.330337   50493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99562.pem /etc/ssl/certs/3ec20f2e.0"
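
The test/ln commands above install each PEM into the OpenSSL trust directory under a subject-hash filename (for example b5213941.0), which is how OpenSSL locates CA certificates at verification time. A minimal Go sketch of that convention follows; it is not minikube's own code and the paths are illustrative.

// hash_link.go - hypothetical sketch of the "<subject-hash>.0" symlink convention.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pemPath, certDir string) error {
	// Same hash the log computes with "openssl x509 -hash -noout -in <pem>".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // replace any existing link, mirroring "ln -fs"
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
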
	I1018 09:38:30.347256   50493 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:38:30.352795   50493 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:38:30.352850   50493 kubeadm.go:400] StartCluster: {Name:cert-options-586276 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-options-586276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.94 Port:8555 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:38:30.352951   50493 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:38:30.353040   50493 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:38:30.400699   50493 cri.go:89] found id: ""
	I1018 09:38:30.400771   50493 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:38:30.414265   50493 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:38:30.427524   50493 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:38:30.441985   50493 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:38:30.441995   50493 kubeadm.go:157] found existing configuration files:
	
	I1018 09:38:30.442045   50493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf
	I1018 09:38:30.456624   50493 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:38:30.456687   50493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:38:30.471244   50493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf
	I1018 09:38:30.487574   50493 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:38:30.487621   50493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:38:30.503668   50493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf
	I1018 09:38:30.520458   50493 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:38:30.520504   50493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:38:30.543031   50493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf
	I1018 09:38:30.565528   50493 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:38:30.565591   50493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:38:30.584828   50493 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1018 09:38:30.746666   50493 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:38:32.707170   50299 addons.go:514] duration metric: took 2.910506ms for enable addons: enabled=[]
	I1018 09:38:32.707214   50299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:38:32.934130   50299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:38:32.958434   50299 node_ready.go:35] waiting up to 6m0s for node "pause-251981" to be "Ready" ...
	I1018 09:38:32.964866   50299 node_ready.go:49] node "pause-251981" is "Ready"
	I1018 09:38:32.964908   50299 node_ready.go:38] duration metric: took 6.419ms for node "pause-251981" to be "Ready" ...
	I1018 09:38:32.964938   50299 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:38:32.965003   50299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:38:32.986798   50299 api_server.go:72] duration metric: took 282.558907ms to wait for apiserver process to appear ...
	I1018 09:38:32.986827   50299 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:38:32.986847   50299 api_server.go:253] Checking apiserver healthz at https://192.168.72.16:8443/healthz ...
	I1018 09:38:32.993738   50299 api_server.go:279] https://192.168.72.16:8443/healthz returned 200:
	ok
	I1018 09:38:32.995451   50299 api_server.go:141] control plane version: v1.34.1
	I1018 09:38:32.995475   50299 api_server.go:131] duration metric: took 8.641828ms to wait for apiserver health ...
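
The health wait above simply polls the apiserver's /healthz endpoint until it answers 200 with body "ok". A rough Go sketch of that probe follows; unlike the real check it skips TLS verification and presents no client certificate, so a locked-down cluster may instead answer 401/403. It is a sketch, not the minikube implementation.

// healthz_probe.go - hypothetical sketch of the /healthz readiness poll.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Address taken from the run above; adjust for your own cluster.
	resp, err := client.Get("https://192.168.72.16:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
}
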
	I1018 09:38:32.995483   50299 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:38:32.999457   50299 system_pods.go:59] 6 kube-system pods found
	I1018 09:38:32.999498   50299 system_pods.go:61] "coredns-66bc5c9577-gkqrn" [80039a0f-d663-4568-85a8-f35ea7394b79] Running
	I1018 09:38:32.999514   50299 system_pods.go:61] "etcd-pause-251981" [25e83f85-43a4-429d-964f-b4f7ad608035] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:38:32.999526   50299 system_pods.go:61] "kube-apiserver-pause-251981" [9eb65d8c-370f-4632-ab3f-8210dd4d618d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:38:32.999537   50299 system_pods.go:61] "kube-controller-manager-pause-251981" [2e054d57-0eb7-40d9-a60e-dfdb0750b7cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:38:32.999542   50299 system_pods.go:61] "kube-proxy-hh69n" [91ff45f3-e63f-4bc3-8bf8-d805a6f89864] Running
	I1018 09:38:32.999549   50299 system_pods.go:61] "kube-scheduler-pause-251981" [96ff66b7-5eab-409e-bb20-9fa19d294edf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:38:32.999559   50299 system_pods.go:74] duration metric: took 4.069236ms to wait for pod list to return data ...
	I1018 09:38:32.999575   50299 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:38:33.001771   50299 default_sa.go:45] found service account: "default"
	I1018 09:38:33.001799   50299 default_sa.go:55] duration metric: took 2.215763ms for default service account to be created ...
	I1018 09:38:33.001810   50299 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:38:33.007534   50299 system_pods.go:86] 6 kube-system pods found
	I1018 09:38:33.007568   50299 system_pods.go:89] "coredns-66bc5c9577-gkqrn" [80039a0f-d663-4568-85a8-f35ea7394b79] Running
	I1018 09:38:33.007581   50299 system_pods.go:89] "etcd-pause-251981" [25e83f85-43a4-429d-964f-b4f7ad608035] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:38:33.007592   50299 system_pods.go:89] "kube-apiserver-pause-251981" [9eb65d8c-370f-4632-ab3f-8210dd4d618d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:38:33.007603   50299 system_pods.go:89] "kube-controller-manager-pause-251981" [2e054d57-0eb7-40d9-a60e-dfdb0750b7cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:38:33.007610   50299 system_pods.go:89] "kube-proxy-hh69n" [91ff45f3-e63f-4bc3-8bf8-d805a6f89864] Running
	I1018 09:38:33.007617   50299 system_pods.go:89] "kube-scheduler-pause-251981" [96ff66b7-5eab-409e-bb20-9fa19d294edf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:38:33.007633   50299 system_pods.go:126] duration metric: took 5.810287ms to wait for k8s-apps to be running ...
	I1018 09:38:33.007649   50299 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:38:33.007704   50299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:38:33.029454   50299 system_svc.go:56] duration metric: took 21.794907ms WaitForService to wait for kubelet
	I1018 09:38:33.029489   50299 kubeadm.go:586] duration metric: took 325.254763ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:38:33.029535   50299 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:38:33.033992   50299 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 09:38:33.034019   50299 node_conditions.go:123] node cpu capacity is 2
	I1018 09:38:33.034032   50299 node_conditions.go:105] duration metric: took 4.491797ms to run NodePressure ...
	I1018 09:38:33.034047   50299 start.go:241] waiting for startup goroutines ...
	I1018 09:38:33.034059   50299 start.go:246] waiting for cluster config update ...
	I1018 09:38:33.034070   50299 start.go:255] writing updated cluster config ...
	I1018 09:38:33.034457   50299 ssh_runner.go:195] Run: rm -f paused
	I1018 09:38:33.040270   50299 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:38:33.040883   50299 kapi.go:59] client config for pause-251981: &rest.Config{Host:"https://192.168.72.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-6063/.minikube/profiles/pause-251981/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-6063/.minikube/profiles/pause-251981/client.key", CAFile:"/home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:38:33.044850   50299 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gkqrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:38:33.051468   50299 pod_ready.go:94] pod "coredns-66bc5c9577-gkqrn" is "Ready"
	I1018 09:38:33.051493   50299 pod_ready.go:86] duration metric: took 6.621705ms for pod "coredns-66bc5c9577-gkqrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:38:33.054310   50299 pod_ready.go:83] waiting for pod "etcd-pause-251981" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:38:32.008424   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:32.009292   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | no network interface addresses found for domain old-k8s-version-874951 (source=lease)
	I1018 09:38:32.009320   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | trying to list again with source=arp
	I1018 09:38:32.009741   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | unable to find current IP address of domain old-k8s-version-874951 in network mk-old-k8s-version-874951 (interfaces detected: [])
	I1018 09:38:32.009772   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:32.009710   51090 retry.go:31] will retry after 2.054068386s: waiting for domain to come up
	I1018 09:38:34.065400   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:34.066115   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | no network interface addresses found for domain old-k8s-version-874951 (source=lease)
	I1018 09:38:34.066138   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | trying to list again with source=arp
	I1018 09:38:34.066492   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | unable to find current IP address of domain old-k8s-version-874951 in network mk-old-k8s-version-874951 (interfaces detected: [])
	I1018 09:38:34.066548   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:34.066486   51090 retry.go:31] will retry after 2.671692824s: waiting for domain to come up
	W1018 09:38:35.061275   50299 pod_ready.go:104] pod "etcd-pause-251981" is not "Ready", error: <nil>
	W1018 09:38:37.062629   50299 pod_ready.go:104] pod "etcd-pause-251981" is not "Ready", error: <nil>
	I1018 09:38:38.061969   50299 pod_ready.go:94] pod "etcd-pause-251981" is "Ready"
	I1018 09:38:38.062006   50299 pod_ready.go:86] duration metric: took 5.007666081s for pod "etcd-pause-251981" in "kube-system" namespace to be "Ready" or be gone ...
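
The pod_ready waits above accept a pod either once its Ready condition turns True or once the pod is gone. A client-go sketch of that check follows; it is an illustration with an assumed kubeconfig path and helper name, not minikube's own implementation.

// pod_ready_check.go - hypothetical "Ready or be gone" check using client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReadyOrGone(clientset *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if errors.IsNotFound(err) {
		return true, nil // pod is gone, which the wait also accepts
	}
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path as written by the run above (illustrative).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21767-6063/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := podReadyOrGone(clientset, "kube-system", "etcd-pause-251981")
	fmt.Println(ready, err)
}
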
	I1018 09:38:38.065893   50299 pod_ready.go:83] waiting for pod "kube-apiserver-pause-251981" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:38:36.741151   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:36.741820   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | no network interface addresses found for domain old-k8s-version-874951 (source=lease)
	I1018 09:38:36.741841   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | trying to list again with source=arp
	I1018 09:38:36.742252   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | unable to find current IP address of domain old-k8s-version-874951 in network mk-old-k8s-version-874951 (interfaces detected: [])
	I1018 09:38:36.742274   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:36.742230   51090 retry.go:31] will retry after 3.121124099s: waiting for domain to come up
	I1018 09:38:39.866388   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:39.867076   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | no network interface addresses found for domain old-k8s-version-874951 (source=lease)
	I1018 09:38:39.867097   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | trying to list again with source=arp
	I1018 09:38:39.867424   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | unable to find current IP address of domain old-k8s-version-874951 in network mk-old-k8s-version-874951 (interfaces detected: [])
	I1018 09:38:39.867450   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | I1018 09:38:39.867375   51090 retry.go:31] will retry after 2.73055378s: waiting for domain to come up
	I1018 09:38:42.815585   50493 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:38:42.815687   50493 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:38:42.815811   50493 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:38:42.815978   50493 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:38:42.816074   50493 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:38:42.816121   50493 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:38:42.817951   50493 out.go:252]   - Generating certificates and keys ...
	I1018 09:38:42.818021   50493 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:38:42.818078   50493 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:38:42.818129   50493 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:38:42.818180   50493 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:38:42.818252   50493 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:38:42.818324   50493 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:38:42.818387   50493 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:38:42.818559   50493 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [cert-options-586276 localhost] and IPs [192.168.50.94 127.0.0.1 ::1]
	I1018 09:38:42.818632   50493 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:38:42.818818   50493 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [cert-options-586276 localhost] and IPs [192.168.50.94 127.0.0.1 ::1]
	I1018 09:38:42.818875   50493 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:38:42.818964   50493 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:38:42.819025   50493 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:38:42.819099   50493 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:38:42.819155   50493 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:38:42.819241   50493 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:38:42.819307   50493 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:38:42.819368   50493 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:38:42.819462   50493 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:38:42.819591   50493 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:38:42.819682   50493 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:38:42.823190   50493 out.go:252]   - Booting up control plane ...
	I1018 09:38:42.823296   50493 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:38:42.823358   50493 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:38:42.823414   50493 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:38:42.823558   50493 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:38:42.823660   50493 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:38:42.823774   50493 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:38:42.823861   50493 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:38:42.823910   50493 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:38:42.824066   50493 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:38:42.824203   50493 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:38:42.824252   50493 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501539968s
	I1018 09:38:42.824327   50493 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:38:42.824401   50493 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.50.94:8555/livez
	I1018 09:38:42.824467   50493 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:38:42.824526   50493 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:38:42.824583   50493 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.546621481s
	I1018 09:38:42.824632   50493 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.294311329s
	I1018 09:38:42.824685   50493 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.5035457s
	I1018 09:38:42.824764   50493 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:38:42.824861   50493 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:38:42.824904   50493 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:38:42.825130   50493 kubeadm.go:318] [mark-control-plane] Marking the node cert-options-586276 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:38:42.825179   50493 kubeadm.go:318] [bootstrap-token] Using token: zrljar.4h4c6d1v50kr6sxp
	I1018 09:38:42.826734   50493 out.go:252]   - Configuring RBAC rules ...
	I1018 09:38:42.826818   50493 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:38:42.826885   50493 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:38:42.827044   50493 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:38:42.827238   50493 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:38:42.827392   50493 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:38:42.827485   50493 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:38:42.827643   50493 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:38:42.827685   50493 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:38:42.827722   50493 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:38:42.827725   50493 kubeadm.go:318] 
	I1018 09:38:42.827777   50493 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:38:42.827781   50493 kubeadm.go:318] 
	I1018 09:38:42.827873   50493 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:38:42.827883   50493 kubeadm.go:318] 
	I1018 09:38:42.827929   50493 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:38:42.828020   50493 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:38:42.828097   50493 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:38:42.828101   50493 kubeadm.go:318] 
	I1018 09:38:42.828187   50493 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:38:42.828199   50493 kubeadm.go:318] 
	I1018 09:38:42.828256   50493 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:38:42.828259   50493 kubeadm.go:318] 
	I1018 09:38:42.828304   50493 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:38:42.828361   50493 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:38:42.828417   50493 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:38:42.828420   50493 kubeadm.go:318] 
	I1018 09:38:42.828510   50493 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:38:42.828625   50493 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:38:42.828632   50493 kubeadm.go:318] 
	I1018 09:38:42.828754   50493 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8555 --token zrljar.4h4c6d1v50kr6sxp \
	I1018 09:38:42.828881   50493 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c4d60fb4a1ceaafe1b1d4013b4f6ceb431304abfc1a8d1095fcadfbdc8e3b7b9 \
	I1018 09:38:42.828906   50493 kubeadm.go:318] 	--control-plane 
	I1018 09:38:42.828912   50493 kubeadm.go:318] 
	I1018 09:38:42.829027   50493 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:38:42.829030   50493 kubeadm.go:318] 
	I1018 09:38:42.829097   50493 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8555 --token zrljar.4h4c6d1v50kr6sxp \
	I1018 09:38:42.829194   50493 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c4d60fb4a1ceaafe1b1d4013b4f6ceb431304abfc1a8d1095fcadfbdc8e3b7b9 
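
The --discovery-token-ca-cert-hash value printed in the join command above is the SHA-256 digest of the DER-encoded public key (SubjectPublicKeyInfo) of the cluster CA certificate. A minimal Go sketch of that derivation, assuming the CA path used by this run:

// ca_cert_hash.go - hypothetical sketch of kubeadm's discovery token CA cert hash.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // CA path from the run above
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the raw SubjectPublicKeyInfo, matching the "sha256:..." value kubeadm prints.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
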
	I1018 09:38:42.829200   50493 cni.go:84] Creating CNI manager for ""
	I1018 09:38:42.829206   50493 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:38:42.832551   50493 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 09:38:42.833731   50493 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 09:38:42.848276   50493 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1018 09:38:42.877107   50493 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:38:42.877196   50493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:38:42.877241   50493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-options-586276 minikube.k8s.io/updated_at=2025_10_18T09_38_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=cert-options-586276 minikube.k8s.io/primary=true
	I1018 09:38:43.084459   50493 kubeadm.go:1113] duration metric: took 207.316826ms to wait for elevateKubeSystemPrivileges
	I1018 09:38:43.084507   50493 ops.go:34] apiserver oom_adj: -16
	I1018 09:38:43.117952   50493 kubeadm.go:402] duration metric: took 12.765072109s to StartCluster
	I1018 09:38:43.117989   50493 settings.go:142] acquiring lock: {Name:mk5c51ba919dd454ddb697f518b92637a3560487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:38:43.118105   50493 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-6063/kubeconfig
	I1018 09:38:43.119392   50493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/kubeconfig: {Name:mkb340db398364bcc27d468da7444ccfad7b82c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:38:43.119629   50493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:38:43.119636   50493 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.94 Port:8555 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:38:43.119704   50493 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:38:43.119782   50493 addons.go:69] Setting storage-provisioner=true in profile "cert-options-586276"
	I1018 09:38:43.119825   50493 addons.go:238] Setting addon storage-provisioner=true in "cert-options-586276"
	I1018 09:38:43.119838   50493 config.go:182] Loaded profile config "cert-options-586276": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:38:43.119834   50493 addons.go:69] Setting default-storageclass=true in profile "cert-options-586276"
	I1018 09:38:43.119855   50493 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-options-586276"
	I1018 09:38:43.119860   50493 host.go:66] Checking if "cert-options-586276" exists ...
	I1018 09:38:43.120297   50493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:38:43.120323   50493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:38:43.120326   50493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:38:43.120358   50493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:38:43.124191   50493 out.go:179] * Verifying Kubernetes components...
	I1018 09:38:43.127337   50493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:38:43.139550   50493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43613
	I1018 09:38:43.139802   50493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33535
	I1018 09:38:43.140120   50493 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:38:43.140380   50493 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:38:43.140592   50493 main.go:141] libmachine: Using API Version  1
	I1018 09:38:43.140605   50493 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:38:43.140775   50493 main.go:141] libmachine: Using API Version  1
	I1018 09:38:43.140786   50493 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:38:43.141056   50493 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:38:43.141145   50493 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:38:43.141271   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetState
	I1018 09:38:43.141712   50493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:38:43.141752   50493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:38:43.145024   50493 addons.go:238] Setting addon default-storageclass=true in "cert-options-586276"
	I1018 09:38:43.145071   50493 host.go:66] Checking if "cert-options-586276" exists ...
	I1018 09:38:43.145345   50493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:38:43.145374   50493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:38:43.159218   50493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I1018 09:38:43.159783   50493 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:38:43.160361   50493 main.go:141] libmachine: Using API Version  1
	I1018 09:38:43.160379   50493 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:38:43.160759   50493 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:38:43.161047   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetState
	I1018 09:38:43.162737   50493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44111
	I1018 09:38:43.163213   50493 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:38:43.164295   50493 main.go:141] libmachine: Using API Version  1
	I1018 09:38:43.164306   50493 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:38:43.164323   50493 main.go:141] libmachine: (cert-options-586276) Calling .DriverName
	I1018 09:38:43.164703   50493 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:38:43.165357   50493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:38:43.165409   50493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:38:43.166543   50493 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1018 09:38:40.074266   50299 pod_ready.go:104] pod "kube-apiserver-pause-251981" is not "Ready", error: <nil>
	W1018 09:38:42.573531   50299 pod_ready.go:104] pod "kube-apiserver-pause-251981" is not "Ready", error: <nil>
	I1018 09:38:43.168027   50493 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:38:43.168038   50493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:38:43.168058   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHHostname
	I1018 09:38:43.171810   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:43.172291   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:43.172471   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:43.173108   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHPort
	I1018 09:38:43.173298   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:43.173458   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHUsername
	I1018 09:38:43.173592   50493 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/cert-options-586276/id_rsa Username:docker}
	I1018 09:38:43.182083   50493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43187
	I1018 09:38:43.182586   50493 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:38:43.183262   50493 main.go:141] libmachine: Using API Version  1
	I1018 09:38:43.183276   50493 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:38:43.184367   50493 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:38:43.184561   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetState
	I1018 09:38:43.186632   50493 main.go:141] libmachine: (cert-options-586276) Calling .DriverName
	I1018 09:38:43.186909   50493 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:38:43.186930   50493 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:38:43.186948   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHHostname
	I1018 09:38:43.190411   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:43.190767   50493 main.go:141] libmachine: (cert-options-586276) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:b2:c5", ip: ""} in network mk-cert-options-586276: {Iface:virbr2 ExpiryTime:2025-10-18 10:38:17 +0000 UTC Type:0 Mac:52:54:00:a3:b2:c5 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:cert-options-586276 Clientid:01:52:54:00:a3:b2:c5}
	I1018 09:38:43.190821   50493 main.go:141] libmachine: (cert-options-586276) DBG | domain cert-options-586276 has defined IP address 192.168.50.94 and MAC address 52:54:00:a3:b2:c5 in network mk-cert-options-586276
	I1018 09:38:43.191070   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHPort
	I1018 09:38:43.191244   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHKeyPath
	I1018 09:38:43.191544   50493 main.go:141] libmachine: (cert-options-586276) Calling .GetSSHUsername
	I1018 09:38:43.191690   50493 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/cert-options-586276/id_rsa Username:docker}
	I1018 09:38:43.471226   50493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:38:43.555752   50493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:38:43.833414   50493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:38:43.835228   50493 start.go:976] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
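
The sed pipeline above rewrites the CoreDNS ConfigMap so that a hosts block resolving host.minikube.internal is consulted before queries fall through to /etc/resolv.conf. A small Go sketch of just that hosts insertion (the log line added before "errors" is omitted), operating on an abbreviated example Corefile rather than the real ConfigMap:

// corefile_hosts.go - hypothetical sketch of the host.minikube.internal injection.
package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
    errors
    health
    forward . /etc/resolv.conf
    cache 30
}`
	hosts := `    hosts {
       192.168.50.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf`
	// Insert the hosts block immediately ahead of the forward plugin.
	patched := strings.Replace(corefile, "    forward . /etc/resolv.conf", hosts, 1)
	fmt.Println(patched)
}
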
	I1018 09:38:43.836432   50493 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:38:43.836487   50493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:38:43.875612   50493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:38:44.029851   50493 main.go:141] libmachine: Making call to close driver server
	I1018 09:38:44.029870   50493 main.go:141] libmachine: (cert-options-586276) Calling .Close
	I1018 09:38:44.029895   50493 api_server.go:72] duration metric: took 910.23545ms to wait for apiserver process to appear ...
	I1018 09:38:44.029909   50493 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:38:44.029953   50493 api_server.go:253] Checking apiserver healthz at https://192.168.50.94:8555/healthz ...
	I1018 09:38:44.030260   50493 main.go:141] libmachine: (cert-options-586276) DBG | Closing plugin on server side
	I1018 09:38:44.030276   50493 main.go:141] libmachine: Successfully made call to close driver server
	I1018 09:38:44.030287   50493 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 09:38:44.030295   50493 main.go:141] libmachine: Making call to close driver server
	I1018 09:38:44.030317   50493 main.go:141] libmachine: (cert-options-586276) Calling .Close
	I1018 09:38:44.030620   50493 main.go:141] libmachine: Successfully made call to close driver server
	I1018 09:38:44.030618   50493 main.go:141] libmachine: (cert-options-586276) DBG | Closing plugin on server side
	I1018 09:38:44.030628   50493 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 09:38:44.046582   50493 api_server.go:279] https://192.168.50.94:8555/healthz returned 200:
	ok
	I1018 09:38:44.050653   50493 api_server.go:141] control plane version: v1.34.1
	I1018 09:38:44.050673   50493 api_server.go:131] duration metric: took 20.758396ms to wait for apiserver health ...
	I1018 09:38:44.050686   50493 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:38:44.051135   50493 main.go:141] libmachine: Making call to close driver server
	I1018 09:38:44.051149   50493 main.go:141] libmachine: (cert-options-586276) Calling .Close
	I1018 09:38:44.051465   50493 main.go:141] libmachine: Successfully made call to close driver server
	I1018 09:38:44.051477   50493 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 09:38:44.062432   50493 system_pods.go:59] 4 kube-system pods found
	I1018 09:38:44.062459   50493 system_pods.go:61] "etcd-cert-options-586276" [67edd06e-9904-46d6-9e52-56c04b6f94a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:38:44.062469   50493 system_pods.go:61] "kube-apiserver-cert-options-586276" [03199a05-896b-4884-a2f4-df3acffcbfdf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:38:44.062478   50493 system_pods.go:61] "kube-controller-manager-cert-options-586276" [c74e2b03-34a4-4b48-9add-42cd736275c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:38:44.062484   50493 system_pods.go:61] "kube-scheduler-cert-options-586276" [3572c713-7f0c-49e5-9bf7-0e5f04de0e70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:38:44.062490   50493 system_pods.go:74] duration metric: took 11.798904ms to wait for pod list to return data ...
	I1018 09:38:44.062510   50493 kubeadm.go:586] duration metric: took 942.844297ms to wait for: map[apiserver:true system_pods:true]
	I1018 09:38:44.062525   50493 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:38:44.065890   50493 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 09:38:44.065908   50493 node_conditions.go:123] node cpu capacity is 2
	I1018 09:38:44.065943   50493 node_conditions.go:105] duration metric: took 3.413799ms to run NodePressure ...
	I1018 09:38:44.065956   50493 start.go:241] waiting for startup goroutines ...
	I1018 09:38:44.326181   50493 main.go:141] libmachine: Making call to close driver server
	I1018 09:38:44.326198   50493 main.go:141] libmachine: (cert-options-586276) Calling .Close
	I1018 09:38:44.326530   50493 main.go:141] libmachine: (cert-options-586276) DBG | Closing plugin on server side
	I1018 09:38:44.326558   50493 main.go:141] libmachine: Successfully made call to close driver server
	I1018 09:38:44.326570   50493 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 09:38:44.326584   50493 main.go:141] libmachine: Making call to close driver server
	I1018 09:38:44.326591   50493 main.go:141] libmachine: (cert-options-586276) Calling .Close
	I1018 09:38:44.326866   50493 main.go:141] libmachine: Successfully made call to close driver server
	I1018 09:38:44.326876   50493 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 09:38:44.329179   50493 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1018 09:38:44.330693   50493 addons.go:514] duration metric: took 1.21099445s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1018 09:38:44.339883   50493 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-options-586276" context rescaled to 1 replicas
	I1018 09:38:44.339915   50493 start.go:246] waiting for cluster config update ...
	I1018 09:38:44.339941   50493 start.go:255] writing updated cluster config ...
	I1018 09:38:44.340336   50493 ssh_runner.go:195] Run: rm -f paused
	I1018 09:38:44.396949   50493 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:38:44.399076   50493 out.go:179] * Done! kubectl is now configured to use "cert-options-586276" cluster and "default" namespace by default
	I1018 09:38:44.073166   50299 pod_ready.go:94] pod "kube-apiserver-pause-251981" is "Ready"
	I1018 09:38:44.073209   50299 pod_ready.go:86] duration metric: took 6.007269364s for pod "kube-apiserver-pause-251981" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:38:44.076362   50299 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-251981" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:38:45.083816   50299 pod_ready.go:94] pod "kube-controller-manager-pause-251981" is "Ready"
	I1018 09:38:45.083851   50299 pod_ready.go:86] duration metric: took 1.007459311s for pod "kube-controller-manager-pause-251981" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:38:45.086652   50299 pod_ready.go:83] waiting for pod "kube-proxy-hh69n" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:38:45.092907   50299 pod_ready.go:94] pod "kube-proxy-hh69n" is "Ready"
	I1018 09:38:45.092966   50299 pod_ready.go:86] duration metric: took 6.287525ms for pod "kube-proxy-hh69n" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:38:45.096465   50299 pod_ready.go:83] waiting for pod "kube-scheduler-pause-251981" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:38:45.103474   50299 pod_ready.go:94] pod "kube-scheduler-pause-251981" is "Ready"
	I1018 09:38:45.103511   50299 pod_ready.go:86] duration metric: took 7.015123ms for pod "kube-scheduler-pause-251981" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:38:45.103529   50299 pod_ready.go:40] duration metric: took 12.063221614s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:38:45.157953   50299 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:38:45.160160   50299 out.go:179] * Done! kubectl is now configured to use "pause-251981" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.069076415Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca796d7b-80d8-43c9-9c10-4d5097d412ef name=/runtime.v1.RuntimeService/Version
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.070640653Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4af32d7d-50da-4dae-acc7-0f6701b0df3f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.071292274Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760780326071254146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4af32d7d-50da-4dae-acc7-0f6701b0df3f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.072154668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=026b3985-f010-4bbc-9d35-a2c4f8e23e62 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.072501444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=026b3985-f010-4bbc-9d35-a2c4f8e23e62 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.073169538Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:892b972e09f42c98bce80d50a7138fa3b1c7f065a30b709d83c34a1be5641266,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760780311203743281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e627267a6b15f9dab3004645432031a2cb9b36cc81e305dd7962c2f8fd477595,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760780305439137500,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb2d4664bef892714e316cbc572e8176f8bfdd60aa897517075752efe53ee19,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760780305454024396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a752a128dd5424649f37cb4d8a0b6942b129d7c8521ee000cfef644d829c1465,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760780305478293962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50dc2290ce14d7255ceeee4d0319c02fbeb45ec099e5320cc8ed64adfe1f6ea,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760780305397155654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernet
es.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491f718eb22fdfde908afbfc80efb672e3b0add928d31a9889c118be7ccb2c74,PodSandboxId:44f3a3c9e0095aadfac61abe79a1b3390b096d88e35c84458b01cd930d104b69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17607
80290010005371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa
170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760780289299644884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4129f2037f9f283b2dcc43160d4a1cceb6aa28b44ee94c719f0b35e305d9b153,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760780289171309478,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201cd2c1e158f4cd558f4e00ed2a29b45d3ac5d6699b526aa3173435
fc58e0e7,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760780289121980073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ee425859eef16741c99f5436243e9681dfb3da34b4721472f7361d29ba47dc,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760780289080277787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a58eff517402787ff5f9c7a733a3863ce25d66a53c0f5e8fd09bd24cfa911d5,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760780289074322882,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a9247b81b38d66905d17b1c7125eaaca22be0c11e84329c01545bf8a63d3f7,PodSandboxId:bb01f32caae719bbc3469508464a92bb8a47233ccc330705fa6ae20a98e3a7a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760780224492759843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=026b3985-f010-4bbc-9d35-a2c4f8e23e62 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.134136881Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6bb6b36b-1e1f-415f-af2c-7fe7e5db5b90 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.134236114Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6bb6b36b-1e1f-415f-af2c-7fe7e5db5b90 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.136089338Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3829453e-1e25-402f-8146-1110d23f8442 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.136920896Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760780326136876523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3829453e-1e25-402f-8146-1110d23f8442 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.137749309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eef4633b-9796-47b1-95f3-d45f44ae9e7d name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.137857596Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eef4633b-9796-47b1-95f3-d45f44ae9e7d name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.138380495Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:892b972e09f42c98bce80d50a7138fa3b1c7f065a30b709d83c34a1be5641266,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760780311203743281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e627267a6b15f9dab3004645432031a2cb9b36cc81e305dd7962c2f8fd477595,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760780305439137500,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb2d4664bef892714e316cbc572e8176f8bfdd60aa897517075752efe53ee19,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760780305454024396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a752a128dd5424649f37cb4d8a0b6942b129d7c8521ee000cfef644d829c1465,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760780305478293962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50dc2290ce14d7255ceeee4d0319c02fbeb45ec099e5320cc8ed64adfe1f6ea,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760780305397155654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernet
es.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491f718eb22fdfde908afbfc80efb672e3b0add928d31a9889c118be7ccb2c74,PodSandboxId:44f3a3c9e0095aadfac61abe79a1b3390b096d88e35c84458b01cd930d104b69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17607
80290010005371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa
170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760780289299644884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4129f2037f9f283b2dcc43160d4a1cceb6aa28b44ee94c719f0b35e305d9b153,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760780289171309478,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201cd2c1e158f4cd558f4e00ed2a29b45d3ac5d6699b526aa3173435
fc58e0e7,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760780289121980073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ee425859eef16741c99f5436243e9681dfb3da34b4721472f7361d29ba47dc,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760780289080277787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a58eff517402787ff5f9c7a733a3863ce25d66a53c0f5e8fd09bd24cfa911d5,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760780289074322882,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a9247b81b38d66905d17b1c7125eaaca22be0c11e84329c01545bf8a63d3f7,PodSandboxId:bb01f32caae719bbc3469508464a92bb8a47233ccc330705fa6ae20a98e3a7a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760780224492759843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eef4633b-9796-47b1-95f3-d45f44ae9e7d name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.155061870Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f8453e87-6322-46f4-a632-40fd33af9a56 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.155512439Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:44f3a3c9e0095aadfac61abe79a1b3390b096d88e35c84458b01cd930d104b69,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-gkqrn,Uid:80039a0f-d663-4568-85a8-f35ea7394b79,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760780288755343508,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T09:37:03.545201402Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bc13c2072329b53d87a66bc042d13c327338a7265c42fa170c387eaf90aca7a5,Metadata:&PodSandboxMetadata{Name:kube-proxy-hh69n,Uid:91ff45f3-e63f-4bc3-8bf8-d805a6f89864,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1760780288630638226,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T09:37:03.297759184Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-251981,Uid:796b7d70b0f8a722cf83fe465c4b2017,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760780288603197194,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796b7d70b0f8a722cf83fe465c4b2017,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: 796b7d70b0f8a722cf83fe465c4b2017,kubernetes.io/config.seen: 2025-10-18T09:36:57.243663822Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-251981,Uid:58baf79f62aa5f6561d388f3289f8931,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760780288571087852,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.16:8443,kubernetes.io/config.hash: 58baf79f62aa5f6561d388f3289f8931,kubernetes.io/config.seen: 2025-10-18T09:36:57.243657981Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{
Id:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&PodSandboxMetadata{Name:etcd-pause-251981,Uid:4ef8d22f0acfad161fd7159db2ab3aaa,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760780288559904104,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.16:2379,kubernetes.io/config.hash: 4ef8d22f0acfad161fd7159db2ab3aaa,kubernetes.io/config.seen: 2025-10-18T09:36:57.243666060Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-251981,Uid:896068ef5175d9af2bc27f8f789b5ff4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760780288514492681,Label
s:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 896068ef5175d9af2bc27f8f789b5ff4,kubernetes.io/config.seen: 2025-10-18T09:36:57.243664960Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bb01f32caae719bbc3469508464a92bb8a47233ccc330705fa6ae20a98e3a7a6,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-gkqrn,Uid:80039a0f-d663-4568-85a8-f35ea7394b79,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1760780223886847323,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/c
onfig.seen: 2025-10-18T09:37:03.545201402Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0d83db5f4a8d0ca2ffca1b803fe8af20aaa27245b9391c6b70855367ce0d6332,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-57c6x,Uid:d41f82cc-f725-46f3-9937-4cb0ad9a4389,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1760780223819102157,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-57c6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d41f82cc-f725-46f3-9937-4cb0ad9a4389,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-18T09:37:03.485820641Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f8453e87-6322-46f4-a632-40fd33af9a56 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.157838105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fee2c359-2baa-454c-9ec5-347c4106c6bb name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.158126124Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fee2c359-2baa-454c-9ec5-347c4106c6bb name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.158649352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:892b972e09f42c98bce80d50a7138fa3b1c7f065a30b709d83c34a1be5641266,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760780311203743281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e627267a6b15f9dab3004645432031a2cb9b36cc81e305dd7962c2f8fd477595,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760780305439137500,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb2d4664bef892714e316cbc572e8176f8bfdd60aa897517075752efe53ee19,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760780305454024396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a752a128dd5424649f37cb4d8a0b6942b129d7c8521ee000cfef644d829c1465,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760780305478293962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50dc2290ce14d7255ceeee4d0319c02fbeb45ec099e5320cc8ed64adfe1f6ea,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760780305397155654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernet
es.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491f718eb22fdfde908afbfc80efb672e3b0add928d31a9889c118be7ccb2c74,PodSandboxId:44f3a3c9e0095aadfac61abe79a1b3390b096d88e35c84458b01cd930d104b69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17607
80290010005371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa
170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760780289299644884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4129f2037f9f283b2dcc43160d4a1cceb6aa28b44ee94c719f0b35e305d9b153,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760780289171309478,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201cd2c1e158f4cd558f4e00ed2a29b45d3ac5d6699b526aa3173435
fc58e0e7,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760780289121980073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ee425859eef16741c99f5436243e9681dfb3da34b4721472f7361d29ba47dc,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760780289080277787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a58eff517402787ff5f9c7a733a3863ce25d66a53c0f5e8fd09bd24cfa911d5,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760780289074322882,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a9247b81b38d66905d17b1c7125eaaca22be0c11e84329c01545bf8a63d3f7,PodSandboxId:bb01f32caae719bbc3469508464a92bb8a47233ccc330705fa6ae20a98e3a7a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760780224492759843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fee2c359-2baa-454c-9ec5-347c4106c6bb name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.206983470Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed9eb177-79ac-47d3-82f2-3584438908a4 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.207313248Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed9eb177-79ac-47d3-82f2-3584438908a4 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.208991301Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a05670fa-5c48-478d-bb5e-e57cffdc8526 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.209722068Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760780326209686937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a05670fa-5c48-478d-bb5e-e57cffdc8526 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.210361459Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ceea0f7-6c16-4507-9d3e-89eed38977ac name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.210515681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ceea0f7-6c16-4507-9d3e-89eed38977ac name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:46 pause-251981 crio[2808]: time="2025-10-18 09:38:46.210963654Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:892b972e09f42c98bce80d50a7138fa3b1c7f065a30b709d83c34a1be5641266,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760780311203743281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e627267a6b15f9dab3004645432031a2cb9b36cc81e305dd7962c2f8fd477595,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760780305439137500,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb2d4664bef892714e316cbc572e8176f8bfdd60aa897517075752efe53ee19,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760780305454024396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a752a128dd5424649f37cb4d8a0b6942b129d7c8521ee000cfef644d829c1465,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760780305478293962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50dc2290ce14d7255ceeee4d0319c02fbeb45ec099e5320cc8ed64adfe1f6ea,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760780305397155654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernet
es.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491f718eb22fdfde908afbfc80efb672e3b0add928d31a9889c118be7ccb2c74,PodSandboxId:44f3a3c9e0095aadfac61abe79a1b3390b096d88e35c84458b01cd930d104b69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17607
80290010005371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa
170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760780289299644884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4129f2037f9f283b2dcc43160d4a1cceb6aa28b44ee94c719f0b35e305d9b153,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760780289171309478,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201cd2c1e158f4cd558f4e00ed2a29b45d3ac5d6699b526aa3173435
fc58e0e7,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760780289121980073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ee425859eef16741c99f5436243e9681dfb3da34b4721472f7361d29ba47dc,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760780289080277787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a58eff517402787ff5f9c7a733a3863ce25d66a53c0f5e8fd09bd24cfa911d5,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760780289074322882,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a9247b81b38d66905d17b1c7125eaaca22be0c11e84329c01545bf8a63d3f7,PodSandboxId:bb01f32caae719bbc3469508464a92bb8a47233ccc330705fa6ae20a98e3a7a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760780224492759843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ceea0f7-6c16-4507-9d3e-89eed38977ac name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	892b972e09f42       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   15 seconds ago       Running             kube-proxy                2                   bc13c2072329b       kube-proxy-hh69n
	a752a128dd542       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   20 seconds ago       Running             kube-controller-manager   2                   79390eaf4de16       kube-controller-manager-pause-251981
	dcb2d4664bef8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   20 seconds ago       Running             kube-scheduler            2                   49112bf07bb4f       kube-scheduler-pause-251981
	e627267a6b15f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   20 seconds ago       Running             etcd                      2                   76e5fecbb9a13       etcd-pause-251981
	d50dc2290ce14       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   20 seconds ago       Running             kube-apiserver            2                   c7058358f8806       kube-apiserver-pause-251981
	491f718eb22fd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   36 seconds ago       Running             coredns                   1                   44f3a3c9e0095       coredns-66bc5c9577-gkqrn
	fc01258562672       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   37 seconds ago       Exited              kube-proxy                1                   bc13c2072329b       kube-proxy-hh69n
	4129f2037f9f2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   37 seconds ago       Exited              kube-apiserver            1                   c7058358f8806       kube-apiserver-pause-251981
	201cd2c1e158f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   37 seconds ago       Exited              kube-controller-manager   1                   79390eaf4de16       kube-controller-manager-pause-251981
	02ee425859eef       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   37 seconds ago       Exited              etcd                      1                   76e5fecbb9a13       etcd-pause-251981
	8a58eff517402       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   37 seconds ago       Exited              kube-scheduler            1                   49112bf07bb4f       kube-scheduler-pause-251981
	b2a9247b81b38       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   bb01f32caae71       coredns-66bc5c9577-gkqrn
	
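A comparable container listing can be reproduced on the node itself. The exact command minikube's log collector runs is not shown in this report, so the invocation below is an assumption (crictl queries the same CRI-O runtime the table above was taken from):

  # assumed reproduction step, not part of the captured logs
  out/minikube-linux-amd64 -p pause-251981 ssh "sudo crictl ps -a"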
	
	==> coredns [491f718eb22fdfde908afbfc80efb672e3b0add928d31a9889c118be7ccb2c74] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43983 - 15372 "HINFO IN 6681702715175531830.7065157308043952480. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079482259s
	
	
	==> coredns [b2a9247b81b38d66905d17b1c7125eaaca22be0c11e84329c01545bf8a63d3f7] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
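Logs for an individual container can also be pulled by (possibly truncated) container ID directly on the node; a minimal sketch, assuming crictl is available in the guest as shown in the runtime version output above:

  # assumed reproduction step, not part of the captured logs
  out/minikube-linux-amd64 -p pause-251981 ssh "sudo crictl logs b2a9247b81b38"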
	
	==> describe nodes <==
	Name:               pause-251981
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-251981
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=pause-251981
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_36_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:36:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-251981
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:38:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:38:28 +0000   Sat, 18 Oct 2025 09:36:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:38:28 +0000   Sat, 18 Oct 2025 09:36:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:38:28 +0000   Sat, 18 Oct 2025 09:36:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:38:28 +0000   Sat, 18 Oct 2025 09:36:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.16
	  Hostname:    pause-251981
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 75698a6d3efc46e289dc91cd9c46d9b8
	  System UUID:                75698a6d-3efc-46e2-89dc-91cd9c46d9b8
	  Boot ID:                    a25046a1-fd19-4efa-a6e1-6f0b9b494494
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-gkqrn                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     103s
	  kube-system                 etcd-pause-251981                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         109s
	  kube-system                 kube-apiserver-pause-251981             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-pause-251981    200m (10%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-hh69n                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-pause-251981             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 14s                kube-proxy       
	  Normal  NodeHasSufficientPID     109s               kubelet          Node pause-251981 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  109s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  109s               kubelet          Node pause-251981 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s               kubelet          Node pause-251981 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 109s               kubelet          Starting kubelet.
	  Normal  NodeReady                108s               kubelet          Node pause-251981 status is now: NodeReady
	  Normal  RegisteredNode           104s               node-controller  Node pause-251981 event: Registered Node pause-251981 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21s (x8 over 22s)  kubelet          Node pause-251981 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 22s)  kubelet          Node pause-251981 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 22s)  kubelet          Node pause-251981 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12s                node-controller  Node pause-251981 event: Registered Node pause-251981 in Controller
	
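The node description above corresponds to what kubectl reports for the control-plane node; a hedged example, assuming the kubeconfig context is named after the profile as in the other tests in this report:

  # assumed reproduction step, not part of the captured logs
  kubectl --context pause-251981 describe node pause-251981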
	
	==> dmesg <==
	[Oct18 09:36] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000000] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001500] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.013233] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.190833] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000027] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000006] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.109400] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.115458] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.105095] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.159929] kauditd_printk_skb: 171 callbacks suppressed
	[Oct18 09:37] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.713733] kauditd_printk_skb: 219 callbacks suppressed
	[ +21.700032] kauditd_printk_skb: 38 callbacks suppressed
	[Oct18 09:38] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.140058] kauditd_printk_skb: 254 callbacks suppressed
	[  +6.723761] kauditd_printk_skb: 81 callbacks suppressed
	
	
	==> etcd [02ee425859eef16741c99f5436243e9681dfb3da34b4721472f7361d29ba47dc] <==
	{"level":"warn","ts":"2025-10-18T09:38:12.315462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:38:12.327569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:38:12.338650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:38:12.352329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:38:12.378305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:38:12.408672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:38:12.482662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44646","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:38:21.314562Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T09:38:21.314812Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-251981","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.16:2380"],"advertise-client-urls":["https://192.168.72.16:2379"]}
	{"level":"error","ts":"2025-10-18T09:38:21.315074Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T09:38:21.315321Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T09:38:21.317381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:38:21.317478Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3a93d4f7634551e8","current-leader-member-id":"3a93d4f7634551e8"}
	{"level":"info","ts":"2025-10-18T09:38:21.317582Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-18T09:38:21.317595Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-18T09:38:21.318071Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.16:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T09:38:21.318256Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.16:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T09:38:21.318291Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.16:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T09:38:21.318224Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T09:38:21.318358Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T09:38:21.318383Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:38:21.325017Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.72.16:2380"}
	{"level":"error","ts":"2025-10-18T09:38:21.325171Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.16:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:38:21.325212Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.72.16:2380"}
	{"level":"info","ts":"2025-10-18T09:38:21.325232Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-251981","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.16:2380"],"advertise-client-urls":["https://192.168.72.16:2379"]}
	
	
	==> etcd [e627267a6b15f9dab3004645432031a2cb9b36cc81e305dd7962c2f8fd477595] <==
	{"level":"warn","ts":"2025-10-18T09:38:30.980991Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T09:38:30.076355Z","time spent":"904.6148ms","remote":"127.0.0.1:36506","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-251981\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-251981\" value_size:3288 >> failure:<>"}
	{"level":"warn","ts":"2025-10-18T09:38:30.981065Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"527.568901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:basic-user\" limit:1 ","response":"range_response_count:1 size:678"}
	{"level":"info","ts":"2025-10-18T09:38:30.981092Z","caller":"traceutil/trace.go:172","msg":"trace[1902514108] range","detail":"{range_begin:/registry/clusterroles/system:basic-user; range_end:; response_count:1; response_revision:460; }","duration":"527.595863ms","start":"2025-10-18T09:38:30.453487Z","end":"2025-10-18T09:38:30.981083Z","steps":["trace[1902514108] 'agreement among raft nodes before linearized reading'  (duration: 527.512417ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:38:30.981115Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T09:38:30.453471Z","time spent":"527.637557ms","remote":"127.0.0.1:36860","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":701,"request content":"key:\"/registry/clusterroles/system:basic-user\" limit:1 "}
	{"level":"warn","ts":"2025-10-18T09:38:30.981199Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"528.045938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-gkqrn\" limit:1 ","response":"range_response_count:1 size:5450"}
	{"level":"info","ts":"2025-10-18T09:38:30.981222Z","caller":"traceutil/trace.go:172","msg":"trace[54133699] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-gkqrn; range_end:; response_count:1; response_revision:460; }","duration":"528.069081ms","start":"2025-10-18T09:38:30.453146Z","end":"2025-10-18T09:38:30.981215Z","steps":["trace[54133699] 'agreement among raft nodes before linearized reading'  (duration: 528.000332ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:38:30.981246Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T09:38:30.453142Z","time spent":"528.09239ms","remote":"127.0.0.1:36506","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":5473,"request content":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-gkqrn\" limit:1 "}
	{"level":"info","ts":"2025-10-18T09:38:31.298385Z","caller":"traceutil/trace.go:172","msg":"trace[1031974631] linearizableReadLoop","detail":"{readStateIndex:496; appliedIndex:496; }","duration":"295.516453ms","start":"2025-10-18T09:38:31.002839Z","end":"2025-10-18T09:38:31.298356Z","steps":["trace[1031974631] 'read index received'  (duration: 295.490741ms)","trace[1031974631] 'applied index is now lower than readState.Index'  (duration: 24.513µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:38:31.298680Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"295.819825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/view\" limit:1 ","response":"range_response_count:1 size:2208"}
	{"level":"info","ts":"2025-10-18T09:38:31.298705Z","caller":"traceutil/trace.go:172","msg":"trace[2060153418] range","detail":"{range_begin:/registry/clusterroles/view; range_end:; response_count:1; response_revision:460; }","duration":"295.864031ms","start":"2025-10-18T09:38:31.002834Z","end":"2025-10-18T09:38:31.298698Z","steps":["trace[2060153418] 'agreement among raft nodes before linearized reading'  (duration: 295.717559ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:38:31.298997Z","caller":"traceutil/trace.go:172","msg":"trace[2094374882] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"297.790673ms","start":"2025-10-18T09:38:31.001195Z","end":"2025-10-18T09:38:31.298985Z","steps":["trace[2094374882] 'process raft request'  (duration: 297.643431ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:38:31.503103Z","caller":"traceutil/trace.go:172","msg":"trace[861030731] linearizableReadLoop","detail":"{readStateIndex:497; appliedIndex:497; }","duration":"200.048267ms","start":"2025-10-18T09:38:31.302943Z","end":"2025-10-18T09:38:31.502991Z","steps":["trace[861030731] 'read index received'  (duration: 200.041117ms)","trace[861030731] 'applied index is now lower than readState.Index'  (duration: 6.017µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:38:31.575796Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"272.844706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" limit:1 ","response":"range_response_count:1 size:840"}
	{"level":"info","ts":"2025-10-18T09:38:31.576127Z","caller":"traceutil/trace.go:172","msg":"trace[1881401895] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:1; response_revision:461; }","duration":"273.194912ms","start":"2025-10-18T09:38:31.302925Z","end":"2025-10-18T09:38:31.576120Z","steps":["trace[1881401895] 'agreement among raft nodes before linearized reading'  (duration: 200.857524ms)","trace[1881401895] 'range keys from in-memory index tree'  (duration: 71.868836ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:38:31.576205Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"271.830966ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-251981\" limit:1 ","response":"range_response_count:1 size:7219"}
	{"level":"info","ts":"2025-10-18T09:38:31.576089Z","caller":"traceutil/trace.go:172","msg":"trace[705410322] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"379.766258ms","start":"2025-10-18T09:38:31.196308Z","end":"2025-10-18T09:38:31.576074Z","steps":["trace[705410322] 'process raft request'  (duration: 307.285464ms)","trace[705410322] 'compare'  (duration: 72.328732ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:38:31.579144Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T09:38:31.196286Z","time spent":"382.805233ms","remote":"127.0.0.1:36292","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":762,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-hh69n.186f8c5d3011ce84\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-hh69n.186f8c5d3011ce84\" value_size:682 lease:5902136596450740629 >> failure:<>"}
	{"level":"info","ts":"2025-10-18T09:38:31.581259Z","caller":"traceutil/trace.go:172","msg":"trace[1247655257] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-251981; range_end:; response_count:1; response_revision:462; }","duration":"276.879486ms","start":"2025-10-18T09:38:31.304361Z","end":"2025-10-18T09:38:31.581241Z","steps":["trace[1247655257] 'agreement among raft nodes before linearized reading'  (duration: 271.637739ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:38:31.940471Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"283.142036ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:kube-dns\" limit:1 ","response":"range_response_count:1 size:576"}
	{"level":"info","ts":"2025-10-18T09:38:31.940560Z","caller":"traceutil/trace.go:172","msg":"trace[435765997] range","detail":"{range_begin:/registry/clusterroles/system:kube-dns; range_end:; response_count:1; response_revision:466; }","duration":"283.303681ms","start":"2025-10-18T09:38:31.657244Z","end":"2025-10-18T09:38:31.940548Z","steps":["trace[435765997] 'agreement among raft nodes before linearized reading'  (duration: 92.771258ms)","trace[435765997] 'range keys from in-memory index tree'  (duration: 190.288897ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:38:31.940808Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.66082ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5902136596450740732 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-251981\" mod_revision:387 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-251981\" value_size:6744 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-251981\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T09:38:31.940933Z","caller":"traceutil/trace.go:172","msg":"trace[1672429614] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"287.786473ms","start":"2025-10-18T09:38:31.653137Z","end":"2025-10-18T09:38:31.940923Z","steps":["trace[1672429614] 'process raft request'  (duration: 96.959159ms)","trace[1672429614] 'compare'  (duration: 190.219894ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:38:37.635016Z","caller":"traceutil/trace.go:172","msg":"trace[1502992284] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"215.778349ms","start":"2025-10-18T09:38:37.419220Z","end":"2025-10-18T09:38:37.634998Z","steps":["trace[1502992284] 'process raft request'  (duration: 215.677471ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:38:37.860267Z","caller":"traceutil/trace.go:172","msg":"trace[1486154118] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"205.00964ms","start":"2025-10-18T09:38:37.655236Z","end":"2025-10-18T09:38:37.860245Z","steps":["trace[1486154118] 'process raft request'  (duration: 203.126044ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:38:46 up 2 min,  0 users,  load average: 1.08, 0.42, 0.16
	Linux pause-251981 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4129f2037f9f283b2dcc43160d4a1cceb6aa28b44ee94c719f0b35e305d9b153] <==
	{"level":"warn","ts":"2025-10-18T09:38:15.642565Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":86,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.666758Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":87,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.690728Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":88,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.715292Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":89,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.739101Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":90,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.764680Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":91,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.789463Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":92,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.814017Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":93,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.840511Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":94,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.865537Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":95,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.892951Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":96,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.918383Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":97,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.945528Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":98,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.970131Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":99,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	E1018 09:38:15.970227       1 controller.go:97] Error removing old endpoints from kubernetes service: rpc error: code = Canceled desc = grpc: the client connection is closing
	W1018 09:38:16.184641       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1018 09:38:16.185084       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1018 09:38:17.183794       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1018 09:38:17.183798       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	E1018 09:38:18.184646       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1018 09:38:18.184874       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1018 09:38:19.184647       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1018 09:38:19.184672       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W1018 09:38:20.184339       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1018 09:38:20.184575       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-apiserver [d50dc2290ce14d7255ceeee4d0319c02fbeb45ec099e5320cc8ed64adfe1f6ea] <==
	I1018 09:38:28.791297       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:38:28.791471       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:38:28.792780       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 09:38:28.792839       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:38:28.792852       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:38:28.792861       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:38:28.792868       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:38:28.820524       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:38:28.820653       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:38:28.820914       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 09:38:28.821271       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 09:38:28.821297       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:38:28.822510       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:38:28.822681       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:38:28.826254       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:38:28.835198       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:38:29.643816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:38:30.450520       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1018 09:38:32.259619       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.72.16]
	I1018 09:38:32.261353       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:38:32.267963       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:38:32.520916       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:38:32.575080       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:38:32.609579       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:38:32.617463       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [201cd2c1e158f4cd558f4e00ed2a29b45d3ac5d6699b526aa3173435fc58e0e7] <==
	I1018 09:38:11.249292       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:38:12.261219       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 09:38:12.261267       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:38:12.266329       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 09:38:12.267465       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 09:38:12.267652       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 09:38:12.267875       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [a752a128dd5424649f37cb4d8a0b6942b129d7c8521ee000cfef644d829c1465] <==
	I1018 09:38:34.057347       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:38:34.057618       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:38:34.057654       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:38:34.061582       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:38:34.062873       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 09:38:34.065114       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:38:34.066323       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:38:34.066396       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:38:34.071806       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:38:34.071864       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:38:34.072208       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:38:34.074163       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:38:34.075322       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:38:34.076688       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:38:34.076893       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:38:34.077285       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:38:34.081265       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:38:34.085115       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:38:34.094556       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:38:34.101387       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:38:34.122845       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 09:38:34.199969       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:38:34.199987       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:38:34.199993       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:38:34.224132       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [892b972e09f42c98bce80d50a7138fa3b1c7f065a30b709d83c34a1be5641266] <==
	I1018 09:38:31.655662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:38:31.756632       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:38:31.756679       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.16"]
	E1018 09:38:31.756801       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:38:31.801631       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 09:38:31.801718       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 09:38:31.801758       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:38:31.812225       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:38:31.812747       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:38:31.812763       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:38:31.814355       1 config.go:200] "Starting service config controller"
	I1018 09:38:31.814385       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:38:31.814759       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:38:31.814793       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:38:31.814847       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:38:31.814864       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:38:31.820262       1 config.go:309] "Starting node config controller"
	I1018 09:38:31.821302       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:38:31.821355       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:38:31.915053       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:38:31.915260       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:38:31.915364       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb] <==
	
	
	==> kube-scheduler [8a58eff517402787ff5f9c7a733a3863ce25d66a53c0f5e8fd09bd24cfa911d5] <==
	E1018 09:38:17.101612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.72.16:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:38:17.144677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.72.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:38:17.148380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.72.16:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:38:17.205360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.72.16:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:38:17.349079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.72.16:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:38:17.351946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.72.16:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:38:17.376590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.72.16:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:38:17.689075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.72.16:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:38:17.849955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.72.16:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:38:17.878745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.72.16:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:38:17.905776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.72.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:38:19.960245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.72.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 09:38:20.378994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.72.16:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:38:20.532330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.72.16:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:38:20.789176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.72.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:38:20.944763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.72.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:38:21.439210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.72.16:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:38:21.456065       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1018 09:38:21.456504       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 09:38:21.456524       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 09:38:21.456573       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1018 09:38:21.456596       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:38:21.456646       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:38:21.456650       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 09:38:21.456715       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dcb2d4664bef892714e316cbc572e8176f8bfdd60aa897517075752efe53ee19] <==
	I1018 09:38:27.875011       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:38:28.736942       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:38:28.736977       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:38:28.736986       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:38:28.736993       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:38:28.783606       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:38:28.783654       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:38:28.788301       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:38:28.789881       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:38:28.789978       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:38:28.808021       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:38:28.908185       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:38:28 pause-251981 kubelet[3827]: I1018 09:38:28.794498    3827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-251981"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: E1018 09:38:28.823165    3827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-251981\" already exists" pod="kube-system/etcd-pause-251981"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: I1018 09:38:28.823221    3827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-251981"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: E1018 09:38:28.851025    3827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-251981\" already exists" pod="kube-system/kube-apiserver-pause-251981"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: I1018 09:38:28.851080    3827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-251981"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: I1018 09:38:28.854133    3827 kubelet_node_status.go:124] "Node was previously registered" node="pause-251981"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: I1018 09:38:28.854260    3827 kubelet_node_status.go:78] "Successfully registered node" node="pause-251981"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: I1018 09:38:28.854294    3827 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: I1018 09:38:28.855687    3827 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: E1018 09:38:28.872070    3827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-251981\" already exists" pod="kube-system/kube-controller-manager-pause-251981"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: I1018 09:38:29.068072    3827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-251981"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: I1018 09:38:29.068331    3827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-251981"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: E1018 09:38:29.084314    3827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-251981\" already exists" pod="kube-system/kube-scheduler-pause-251981"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: E1018 09:38:29.085591    3827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-251981\" already exists" pod="kube-system/etcd-pause-251981"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: I1018 09:38:29.681934    3827 apiserver.go:52] "Watching apiserver"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: I1018 09:38:29.737277    3827 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: I1018 09:38:29.759528    3827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91ff45f3-e63f-4bc3-8bf8-d805a6f89864-lib-modules\") pod \"kube-proxy-hh69n\" (UID: \"91ff45f3-e63f-4bc3-8bf8-d805a6f89864\") " pod="kube-system/kube-proxy-hh69n"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: I1018 09:38:29.759642    3827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91ff45f3-e63f-4bc3-8bf8-d805a6f89864-xtables-lock\") pod \"kube-proxy-hh69n\" (UID: \"91ff45f3-e63f-4bc3-8bf8-d805a6f89864\") " pod="kube-system/kube-proxy-hh69n"
	Oct 18 09:38:30 pause-251981 kubelet[3827]: I1018 09:38:30.072310    3827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-251981"
	Oct 18 09:38:30 pause-251981 kubelet[3827]: E1018 09:38:30.997853    3827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-251981\" already exists" pod="kube-system/kube-scheduler-pause-251981"
	Oct 18 09:38:31 pause-251981 kubelet[3827]: I1018 09:38:31.189588    3827 scope.go:117] "RemoveContainer" containerID="fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb"
	Oct 18 09:38:34 pause-251981 kubelet[3827]: E1018 09:38:34.925051    3827 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760780314924211064  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 09:38:34 pause-251981 kubelet[3827]: E1018 09:38:34.925107    3827 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760780314924211064  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 09:38:44 pause-251981 kubelet[3827]: E1018 09:38:44.927865    3827 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760780324927234157  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 09:38:44 pause-251981 kubelet[3827]: E1018 09:38:44.927924    3827 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760780324927234157  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-251981 -n pause-251981
helpers_test.go:269: (dbg) Run:  kubectl --context pause-251981 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-251981 -n pause-251981
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-251981 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-251981 logs -n 25: (1.772366769s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬──────────
───────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                  ARGS                                                                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼──────────
───────────┼─────────────────────┤
	│ start   │ -p stopped-upgrade-253577 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                          │ stopped-upgrade-253577       │ jenkins │ v1.32.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:36 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-947647 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                                             │ running-upgrade-947647       │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ delete  │ -p running-upgrade-947647                                                                                                                                                                                                                                               │ running-upgrade-947647       │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ start   │ -p pause-251981 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                     │ pause-251981                 │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:37 UTC │
	│ ssh     │ -p NoKubernetes-914044 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                                                 │ NoKubernetes-914044          │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ stop    │ -p NoKubernetes-914044                                                                                                                                                                                                                                                  │ NoKubernetes-914044          │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ start   │ -p NoKubernetes-914044 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                              │ NoKubernetes-914044          │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:36 UTC │
	│ stop    │ stopped-upgrade-253577 stop                                                                                                                                                                                                                                             │ stopped-upgrade-253577       │ jenkins │ v1.32.0 │ 18 Oct 25 09:36 UTC │ 18 Oct 25 09:36 UTC │
	│ start   │ -p stopped-upgrade-253577 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                      │ stopped-upgrade-253577       │ jenkins │ v1.37.0 │ 18 Oct 25 09:36 UTC │ 18 Oct 25 09:37 UTC │
	│ ssh     │ -p NoKubernetes-914044 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                                                 │ NoKubernetes-914044          │ jenkins │ v1.37.0 │ 18 Oct 25 09:36 UTC │                     │
	│ delete  │ -p NoKubernetes-914044                                                                                                                                                                                                                                                  │ NoKubernetes-914044          │ jenkins │ v1.37.0 │ 18 Oct 25 09:36 UTC │ 18 Oct 25 09:36 UTC │
	│ start   │ -p cert-expiration-209551 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                        │ cert-expiration-209551       │ jenkins │ v1.37.0 │ 18 Oct 25 09:36 UTC │ 18 Oct 25 09:37 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-253577 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                                             │ stopped-upgrade-253577       │ jenkins │ v1.37.0 │ 18 Oct 25 09:37 UTC │                     │
	│ delete  │ -p stopped-upgrade-253577                                                                                                                                                                                                                                               │ stopped-upgrade-253577       │ jenkins │ v1.37.0 │ 18 Oct 25 09:37 UTC │ 18 Oct 25 09:37 UTC │
	│ start   │ -p force-systemd-flag-850953 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                   │ force-systemd-flag-850953    │ jenkins │ v1.37.0 │ 18 Oct 25 09:37 UTC │ 18 Oct 25 09:38 UTC │
	│ start   │ -p pause-251981 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                              │ pause-251981                 │ jenkins │ v1.37.0 │ 18 Oct 25 09:37 UTC │ 18 Oct 25 09:38 UTC │
	│ delete  │ -p kubernetes-upgrade-178467                                                                                                                                                                                                                                            │ kubernetes-upgrade-178467    │ jenkins │ v1.37.0 │ 18 Oct 25 09:37 UTC │ 18 Oct 25 09:37 UTC │
	│ start   │ -p cert-options-586276 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                     │ cert-options-586276          │ jenkins │ v1.37.0 │ 18 Oct 25 09:37 UTC │ 18 Oct 25 09:38 UTC │
	│ ssh     │ force-systemd-flag-850953 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                                                    │ force-systemd-flag-850953    │ jenkins │ v1.37.0 │ 18 Oct 25 09:38 UTC │ 18 Oct 25 09:38 UTC │
	│ delete  │ -p force-systemd-flag-850953                                                                                                                                                                                                                                            │ force-systemd-flag-850953    │ jenkins │ v1.37.0 │ 18 Oct 25 09:38 UTC │ 18 Oct 25 09:38 UTC │
	│ start   │ -p old-k8s-version-874951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0 │ old-k8s-version-874951       │ jenkins │ v1.37.0 │ 18 Oct 25 09:38 UTC │                     │
	│ ssh     │ cert-options-586276 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                                             │ cert-options-586276          │ jenkins │ v1.37.0 │ 18 Oct 25 09:38 UTC │ 18 Oct 25 09:38 UTC │
	│ ssh     │ -p cert-options-586276 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                                           │ cert-options-586276          │ jenkins │ v1.37.0 │ 18 Oct 25 09:38 UTC │ 18 Oct 25 09:38 UTC │
	│ delete  │ -p cert-options-586276                                                                                                                                                                                                                                                  │ cert-options-586276          │ jenkins │ v1.37.0 │ 18 Oct 25 09:38 UTC │ 18 Oct 25 09:38 UTC │
	│ start   │ -p default-k8s-diff-port-263234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-263234 │ jenkins │ v1.37.0 │ 18 Oct 25 09:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴──────────
───────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:38:45
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:38:45.900437   51572 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:38:45.900719   51572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:38:45.900729   51572 out.go:374] Setting ErrFile to fd 2...
	I1018 09:38:45.900735   51572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:38:45.900995   51572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
	I1018 09:38:45.901575   51572 out.go:368] Setting JSON to false
	I1018 09:38:45.902578   51572 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4876,"bootTime":1760775450,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:38:45.902664   51572 start.go:141] virtualization: kvm guest
	I1018 09:38:45.904976   51572 out.go:179] * [default-k8s-diff-port-263234] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:38:45.906655   51572 notify.go:220] Checking for updates...
	I1018 09:38:45.906672   51572 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:38:45.908430   51572 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:38:45.910079   51572 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	I1018 09:38:45.911596   51572 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	I1018 09:38:45.915602   51572 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:38:45.917324   51572 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:38:45.919582   51572 config.go:182] Loaded profile config "cert-expiration-209551": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:38:45.919767   51572 config.go:182] Loaded profile config "old-k8s-version-874951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:38:45.919969   51572 config.go:182] Loaded profile config "pause-251981": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:38:45.920096   51572 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:38:45.967458   51572 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 09:38:45.968827   51572 start.go:305] selected driver: kvm2
	I1018 09:38:45.968844   51572 start.go:925] validating driver "kvm2" against <nil>
	I1018 09:38:45.968856   51572 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:38:45.969692   51572 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:38:45.969799   51572 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21767-6063/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:38:45.986293   51572 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:38:45.986351   51572 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21767-6063/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:38:46.002858   51572 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:38:46.002909   51572 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:38:46.003219   51572 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:38:46.003268   51572 cni.go:84] Creating CNI manager for ""
	I1018 09:38:46.003326   51572 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:38:46.003341   51572 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 09:38:46.003407   51572 start.go:349] cluster config:
	{Name:default-k8s-diff-port-263234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-263234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:38:46.003531   51572 iso.go:125] acquiring lock: {Name:mk5e486e8f05c541fb7f7e8ec869cafc091f385a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:38:46.006226   51572 out.go:179] * Starting "default-k8s-diff-port-263234" primary control-plane node in "default-k8s-diff-port-263234" cluster
	I1018 09:38:46.007648   51572 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:38:46.007706   51572 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:38:46.007721   51572 cache.go:58] Caching tarball of preloaded images
	I1018 09:38:46.007883   51572 preload.go:233] Found /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:38:46.007897   51572 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:38:46.008041   51572 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/config.json ...
	I1018 09:38:46.008076   51572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/config.json: {Name:mk012684c0e9af39a589e6c0001ed0e9343dd7a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:38:46.008259   51572 start.go:360] acquireMachinesLock for default-k8s-diff-port-263234: {Name:mk264c321ec76ef9ad1eaece53fae2e5807c459a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 09:38:46.008315   51572 start.go:364] duration metric: took 31.91µs to acquireMachinesLock for "default-k8s-diff-port-263234"
	I1018 09:38:46.008342   51572 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-263234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-263234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:38:46.008412   51572 start.go:125] createHost starting for "" (driver="kvm2")
	I1018 09:38:42.601317   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:42.602103   51011 main.go:141] libmachine: (old-k8s-version-874951) found domain IP: 192.168.83.158
	I1018 09:38:42.602138   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has current primary IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:42.602147   51011 main.go:141] libmachine: (old-k8s-version-874951) reserving static IP address...
	I1018 09:38:42.602671   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-874951", mac: "52:54:00:e6:82:d2", ip: "192.168.83.158"} in network mk-old-k8s-version-874951
	I1018 09:38:42.856984   51011 main.go:141] libmachine: (old-k8s-version-874951) reserved static IP address 192.168.83.158 for domain old-k8s-version-874951
	I1018 09:38:42.857009   51011 main.go:141] libmachine: (old-k8s-version-874951) waiting for SSH...
	I1018 09:38:42.857028   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | Getting to WaitForSSH function...
	I1018 09:38:42.860637   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:42.861144   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:42.861177   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:42.861390   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | Using SSH client type: external
	I1018 09:38:42.861436   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | Using SSH private key: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951/id_rsa (-rw-------)
	I1018 09:38:42.861471   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 09:38:42.861487   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | About to run SSH command:
	I1018 09:38:42.861504   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | exit 0
	I1018 09:38:42.999270   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | SSH cmd err, output: <nil>: 
	I1018 09:38:42.999567   51011 main.go:141] libmachine: (old-k8s-version-874951) domain creation complete
	I1018 09:38:43.000019   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetConfigRaw
	I1018 09:38:43.000675   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .DriverName
	I1018 09:38:43.000889   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .DriverName
	I1018 09:38:43.001120   51011 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1018 09:38:43.001135   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetState
	I1018 09:38:43.002773   51011 main.go:141] libmachine: Detecting operating system of created instance...
	I1018 09:38:43.002794   51011 main.go:141] libmachine: Waiting for SSH to be available...
	I1018 09:38:43.002804   51011 main.go:141] libmachine: Getting to WaitForSSH function...
	I1018 09:38:43.002813   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHHostname
	I1018 09:38:43.006497   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.007005   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:43.007050   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.007581   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHPort
	I1018 09:38:43.007768   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:43.007939   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:43.008130   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHUsername
	I1018 09:38:43.008325   51011 main.go:141] libmachine: Using SSH client type: native
	I1018 09:38:43.008542   51011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.83.158 22 <nil> <nil>}
	I1018 09:38:43.008553   51011 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1018 09:38:43.122267   51011 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:38:43.122296   51011 main.go:141] libmachine: Detecting the provisioner...
	I1018 09:38:43.122307   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHHostname
	I1018 09:38:43.127232   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.127954   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:43.128013   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.128318   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHPort
	I1018 09:38:43.128568   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:43.128759   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:43.129016   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHUsername
	I1018 09:38:43.129291   51011 main.go:141] libmachine: Using SSH client type: native
	I1018 09:38:43.129515   51011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.83.158 22 <nil> <nil>}
	I1018 09:38:43.129528   51011 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1018 09:38:43.244131   51011 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1018 09:38:43.244261   51011 main.go:141] libmachine: found compatible host: buildroot
	I1018 09:38:43.244279   51011 main.go:141] libmachine: Provisioning with buildroot...
	I1018 09:38:43.244290   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetMachineName
	I1018 09:38:43.244638   51011 buildroot.go:166] provisioning hostname "old-k8s-version-874951"
	I1018 09:38:43.244673   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetMachineName
	I1018 09:38:43.244844   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHHostname
	I1018 09:38:43.248262   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.248757   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:43.248789   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.248971   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHPort
	I1018 09:38:43.249197   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:43.249374   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:43.249516   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHUsername
	I1018 09:38:43.249696   51011 main.go:141] libmachine: Using SSH client type: native
	I1018 09:38:43.250005   51011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.83.158 22 <nil> <nil>}
	I1018 09:38:43.250034   51011 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-874951 && echo "old-k8s-version-874951" | sudo tee /etc/hostname
	I1018 09:38:43.379573   51011 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-874951
	
	I1018 09:38:43.379605   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHHostname
	I1018 09:38:43.383183   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.383684   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:43.383725   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.383905   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHPort
	I1018 09:38:43.384167   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:43.384412   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:43.384621   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHUsername
	I1018 09:38:43.384875   51011 main.go:141] libmachine: Using SSH client type: native
	I1018 09:38:43.385203   51011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.83.158 22 <nil> <nil>}
	I1018 09:38:43.385239   51011 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-874951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-874951/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-874951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:38:43.508210   51011 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:38:43.508246   51011 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21767-6063/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-6063/.minikube}
	I1018 09:38:43.508289   51011 buildroot.go:174] setting up certificates
	I1018 09:38:43.508302   51011 provision.go:84] configureAuth start
	I1018 09:38:43.508316   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetMachineName
	I1018 09:38:43.508616   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetIP
	I1018 09:38:43.512577   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.513131   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:43.513165   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.513388   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHHostname
	I1018 09:38:43.516641   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.517168   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:43.517219   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.517457   51011 provision.go:143] copyHostCerts
	I1018 09:38:43.517549   51011 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-6063/.minikube/ca.pem, removing ...
	I1018 09:38:43.517576   51011 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-6063/.minikube/ca.pem
	I1018 09:38:43.517668   51011 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-6063/.minikube/ca.pem (1078 bytes)
	I1018 09:38:43.517835   51011 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-6063/.minikube/cert.pem, removing ...
	I1018 09:38:43.517853   51011 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-6063/.minikube/cert.pem
	I1018 09:38:43.517908   51011 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-6063/.minikube/cert.pem (1123 bytes)
	I1018 09:38:43.518064   51011 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-6063/.minikube/key.pem, removing ...
	I1018 09:38:43.518081   51011 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-6063/.minikube/key.pem
	I1018 09:38:43.518132   51011 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-6063/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-6063/.minikube/key.pem (1675 bytes)
	I1018 09:38:43.518257   51011 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-874951 san=[127.0.0.1 192.168.83.158 localhost minikube old-k8s-version-874951]
	I1018 09:38:43.718062   51011 provision.go:177] copyRemoteCerts
	I1018 09:38:43.718136   51011 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:38:43.718175   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHHostname
	I1018 09:38:43.721750   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.722204   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:43.722244   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.722512   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHPort
	I1018 09:38:43.722780   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:43.723005   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHUsername
	I1018 09:38:43.723171   51011 sshutil.go:53] new ssh client: &{IP:192.168.83.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951/id_rsa Username:docker}
	I1018 09:38:43.809447   51011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:38:43.852248   51011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 09:38:43.894377   51011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:38:43.938498   51011 provision.go:87] duration metric: took 430.178847ms to configureAuth
	I1018 09:38:43.938536   51011 buildroot.go:189] setting minikube options for container-runtime
	I1018 09:38:43.938760   51011 config.go:182] Loaded profile config "old-k8s-version-874951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:38:43.938866   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHHostname
	I1018 09:38:43.942538   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.942993   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:43.943026   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:43.943321   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHPort
	I1018 09:38:43.943566   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:43.943783   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:43.943973   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHUsername
	I1018 09:38:43.944237   51011 main.go:141] libmachine: Using SSH client type: native
	I1018 09:38:43.944539   51011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.83.158 22 <nil> <nil>}
	I1018 09:38:43.944564   51011 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:38:44.241792   51011 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:38:44.241827   51011 main.go:141] libmachine: Checking connection to Docker...
	I1018 09:38:44.241865   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetURL
	I1018 09:38:44.243748   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | using libvirt version 8000000
	I1018 09:38:44.247612   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:44.248067   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:44.248097   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:44.248319   51011 main.go:141] libmachine: Docker is up and running!
	I1018 09:38:44.248333   51011 main.go:141] libmachine: Reticulating splines...
	I1018 09:38:44.248340   51011 client.go:171] duration metric: took 21.230035616s to LocalClient.Create
	I1018 09:38:44.248366   51011 start.go:167] duration metric: took 21.23010891s to libmachine.API.Create "old-k8s-version-874951"
	I1018 09:38:44.248390   51011 start.go:293] postStartSetup for "old-k8s-version-874951" (driver="kvm2")
	I1018 09:38:44.248414   51011 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:38:44.248435   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .DriverName
	I1018 09:38:44.248729   51011 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:38:44.248764   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHHostname
	I1018 09:38:44.251511   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:44.251958   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:44.251995   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:44.252202   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHPort
	I1018 09:38:44.252404   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:44.252555   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHUsername
	I1018 09:38:44.252741   51011 sshutil.go:53] new ssh client: &{IP:192.168.83.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951/id_rsa Username:docker}
	I1018 09:38:44.348440   51011 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:38:44.354378   51011 info.go:137] Remote host: Buildroot 2025.02
	I1018 09:38:44.354409   51011 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-6063/.minikube/addons for local assets ...
	I1018 09:38:44.354529   51011 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-6063/.minikube/files for local assets ...
	I1018 09:38:44.354635   51011 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-6063/.minikube/files/etc/ssl/certs/99562.pem -> 99562.pem in /etc/ssl/certs
	I1018 09:38:44.354768   51011 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:38:44.370519   51011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/files/etc/ssl/certs/99562.pem --> /etc/ssl/certs/99562.pem (1708 bytes)
	I1018 09:38:44.408033   51011 start.go:296] duration metric: took 159.629149ms for postStartSetup
	I1018 09:38:44.408110   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetConfigRaw
	I1018 09:38:44.408901   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetIP
	I1018 09:38:44.412255   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:44.412747   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:44.412770   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:44.413148   51011 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/config.json ...
	I1018 09:38:44.413500   51011 start.go:128] duration metric: took 21.417510346s to createHost
	I1018 09:38:44.413545   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHHostname
	I1018 09:38:44.417799   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:44.418414   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:44.418446   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:44.418733   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHPort
	I1018 09:38:44.419001   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:44.419246   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:44.419486   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHUsername
	I1018 09:38:44.419671   51011 main.go:141] libmachine: Using SSH client type: native
	I1018 09:38:44.419999   51011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.83.158 22 <nil> <nil>}
	I1018 09:38:44.420013   51011 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 09:38:44.534229   51011 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760780324.491691809
	
	I1018 09:38:44.534252   51011 fix.go:216] guest clock: 1760780324.491691809
	I1018 09:38:44.534271   51011 fix.go:229] Guest: 2025-10-18 09:38:44.491691809 +0000 UTC Remote: 2025-10-18 09:38:44.413520307 +0000 UTC m=+27.716148760 (delta=78.171502ms)
	I1018 09:38:44.534324   51011 fix.go:200] guest clock delta is within tolerance: 78.171502ms
	I1018 09:38:44.534335   51011 start.go:83] releasing machines lock for "old-k8s-version-874951", held for 21.538497859s
	I1018 09:38:44.534361   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .DriverName
	I1018 09:38:44.534672   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetIP
	I1018 09:38:44.538519   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:44.539017   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:44.539062   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:44.539304   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .DriverName
	I1018 09:38:44.540110   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .DriverName
	I1018 09:38:44.540319   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .DriverName
	I1018 09:38:44.540430   51011 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:38:44.540479   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHHostname
	I1018 09:38:44.540626   51011 ssh_runner.go:195] Run: cat /version.json
	I1018 09:38:44.540651   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHHostname
	I1018 09:38:44.544187   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:44.544225   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:44.544705   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:44.544737   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:44.544904   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:44.544955   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:44.545111   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHPort
	I1018 09:38:44.545231   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHPort
	I1018 09:38:44.545351   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:44.545424   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHKeyPath
	I1018 09:38:44.545518   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHUsername
	I1018 09:38:44.545591   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetSSHUsername
	I1018 09:38:44.545687   51011 sshutil.go:53] new ssh client: &{IP:192.168.83.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951/id_rsa Username:docker}
	I1018 09:38:44.545906   51011 sshutil.go:53] new ssh client: &{IP:192.168.83.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/old-k8s-version-874951/id_rsa Username:docker}
	I1018 09:38:44.631702   51011 ssh_runner.go:195] Run: systemctl --version
	I1018 09:38:44.661086   51011 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:38:44.829783   51011 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:38:44.837651   51011 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:38:44.837732   51011 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:38:44.861404   51011 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 09:38:44.861430   51011 start.go:495] detecting cgroup driver to use...
	I1018 09:38:44.861528   51011 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:38:44.882808   51011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:38:44.909409   51011 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:38:44.909482   51011 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:38:44.935390   51011 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:38:44.957052   51011 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:38:45.152336   51011 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:38:45.419561   51011 docker.go:234] disabling docker service ...
	I1018 09:38:45.419660   51011 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:38:45.438811   51011 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:38:45.458496   51011 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:38:45.674213   51011 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:38:45.837114   51011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:38:45.860070   51011 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:38:45.892975   51011 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 09:38:45.893062   51011 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:45.908287   51011 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:38:45.908369   51011 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:45.925276   51011 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:45.945708   51011 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:45.963703   51011 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:38:45.978912   51011 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:45.994132   51011 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:46.022162   51011 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:38:46.038856   51011 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:38:46.054950   51011 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 09:38:46.055009   51011 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 09:38:46.077504   51011 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:38:46.095002   51011 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:38:46.267767   51011 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:38:46.419331   51011 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:38:46.419418   51011 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:38:46.425824   51011 start.go:563] Will wait 60s for crictl version
	I1018 09:38:46.425905   51011 ssh_runner.go:195] Run: which crictl
	I1018 09:38:46.430531   51011 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 09:38:46.475212   51011 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 09:38:46.475322   51011 ssh_runner.go:195] Run: crio --version
	I1018 09:38:46.514640   51011 ssh_runner.go:195] Run: crio --version
	I1018 09:38:46.558450   51011 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.29.1 ...
	I1018 09:38:46.559841   51011 main.go:141] libmachine: (old-k8s-version-874951) Calling .GetIP
	I1018 09:38:46.564290   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:46.564806   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:82:d2", ip: ""} in network mk-old-k8s-version-874951: {Iface:virbr1 ExpiryTime:2025-10-18 10:38:39 +0000 UTC Type:0 Mac:52:54:00:e6:82:d2 Iaid: IPaddr:192.168.83.158 Prefix:24 Hostname:old-k8s-version-874951 Clientid:01:52:54:00:e6:82:d2}
	I1018 09:38:46.564837   51011 main.go:141] libmachine: (old-k8s-version-874951) DBG | domain old-k8s-version-874951 has defined IP address 192.168.83.158 and MAC address 52:54:00:e6:82:d2 in network mk-old-k8s-version-874951
	I1018 09:38:46.565199   51011 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1018 09:38:46.571087   51011 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:38:46.589543   51011 kubeadm.go:883] updating cluster {Name:old-k8s-version-874951 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.0 ClusterName:old-k8s-version-874951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.158 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:38:46.589696   51011 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 09:38:46.589772   51011 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:38:46.641154   51011 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0". assuming images are not preloaded.
	I1018 09:38:46.641260   51011 ssh_runner.go:195] Run: which lz4
	I1018 09:38:46.646187   51011 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 09:38:46.651617   51011 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 09:38:46.651662   51011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457056555 bytes)
	
	
	==> CRI-O <==
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.765199226Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760780328765172352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ceb9275b-34b8-49d8-9709-b4031ede893e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.765794620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be65d0bc-cb3b-44a2-84f2-ec0aa245520b name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.765884259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be65d0bc-cb3b-44a2-84f2-ec0aa245520b name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.766472921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:892b972e09f42c98bce80d50a7138fa3b1c7f065a30b709d83c34a1be5641266,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760780311203743281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e627267a6b15f9dab3004645432031a2cb9b36cc81e305dd7962c2f8fd477595,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760780305439137500,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb2d4664bef892714e316cbc572e8176f8bfdd60aa897517075752efe53ee19,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760780305454024396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a752a128dd5424649f37cb4d8a0b6942b129d7c8521ee000cfef644d829c1465,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760780305478293962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50dc2290ce14d7255ceeee4d0319c02fbeb45ec099e5320cc8ed64adfe1f6ea,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760780305397155654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernet
es.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491f718eb22fdfde908afbfc80efb672e3b0add928d31a9889c118be7ccb2c74,PodSandboxId:44f3a3c9e0095aadfac61abe79a1b3390b096d88e35c84458b01cd930d104b69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17607
80290010005371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa
170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760780289299644884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4129f2037f9f283b2dcc43160d4a1cceb6aa28b44ee94c719f0b35e305d9b153,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760780289171309478,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201cd2c1e158f4cd558f4e00ed2a29b45d3ac5d6699b526aa3173435
fc58e0e7,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760780289121980073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ee425859eef16741c99f5436243e9681dfb3da34b4721472f7361d29ba47dc,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760780289080277787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a58eff517402787ff5f9c7a733a3863ce25d66a53c0f5e8fd09bd24cfa911d5,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760780289074322882,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a9247b81b38d66905d17b1c7125eaaca22be0c11e84329c01545bf8a63d3f7,PodSandboxId:bb01f32caae719bbc3469508464a92bb8a47233ccc330705fa6ae20a98e3a7a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760780224492759843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be65d0bc-cb3b-44a2-84f2-ec0aa245520b name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.841181729Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26338697-05ba-443f-bde1-ae8cfb7be886 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.841294772Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26338697-05ba-443f-bde1-ae8cfb7be886 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.842907044Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fdef27cd-ee3b-4506-9680-a56cbb72a2e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.843621346Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760780328843585968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fdef27cd-ee3b-4506-9680-a56cbb72a2e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.844376707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d84906a5-9e92-45fb-8c0b-21f7362832e6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.844548560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d84906a5-9e92-45fb-8c0b-21f7362832e6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.844997294Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:892b972e09f42c98bce80d50a7138fa3b1c7f065a30b709d83c34a1be5641266,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760780311203743281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e627267a6b15f9dab3004645432031a2cb9b36cc81e305dd7962c2f8fd477595,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760780305439137500,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb2d4664bef892714e316cbc572e8176f8bfdd60aa897517075752efe53ee19,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760780305454024396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a752a128dd5424649f37cb4d8a0b6942b129d7c8521ee000cfef644d829c1465,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760780305478293962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50dc2290ce14d7255ceeee4d0319c02fbeb45ec099e5320cc8ed64adfe1f6ea,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760780305397155654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernet
es.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491f718eb22fdfde908afbfc80efb672e3b0add928d31a9889c118be7ccb2c74,PodSandboxId:44f3a3c9e0095aadfac61abe79a1b3390b096d88e35c84458b01cd930d104b69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17607
80290010005371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa
170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760780289299644884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4129f2037f9f283b2dcc43160d4a1cceb6aa28b44ee94c719f0b35e305d9b153,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760780289171309478,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201cd2c1e158f4cd558f4e00ed2a29b45d3ac5d6699b526aa3173435
fc58e0e7,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760780289121980073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ee425859eef16741c99f5436243e9681dfb3da34b4721472f7361d29ba47dc,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760780289080277787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a58eff517402787ff5f9c7a733a3863ce25d66a53c0f5e8fd09bd24cfa911d5,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760780289074322882,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a9247b81b38d66905d17b1c7125eaaca22be0c11e84329c01545bf8a63d3f7,PodSandboxId:bb01f32caae719bbc3469508464a92bb8a47233ccc330705fa6ae20a98e3a7a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760780224492759843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d84906a5-9e92-45fb-8c0b-21f7362832e6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.914250506Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=abf8215e-a555-4207-b72e-5ee35b7ca90b name=/runtime.v1.RuntimeService/Version
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.914536237Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=abf8215e-a555-4207-b72e-5ee35b7ca90b name=/runtime.v1.RuntimeService/Version
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.917269761Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9c2d7ad-be11-4b2d-b7a1-3e1f48b295a4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.918504160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760780328918329795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9c2d7ad-be11-4b2d-b7a1-3e1f48b295a4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.919838290Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8efdbf89-8d69-4412-864c-6db98b23c34c name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.920073024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8efdbf89-8d69-4412-864c-6db98b23c34c name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.920369611Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:892b972e09f42c98bce80d50a7138fa3b1c7f065a30b709d83c34a1be5641266,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760780311203743281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e627267a6b15f9dab3004645432031a2cb9b36cc81e305dd7962c2f8fd477595,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760780305439137500,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb2d4664bef892714e316cbc572e8176f8bfdd60aa897517075752efe53ee19,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760780305454024396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a752a128dd5424649f37cb4d8a0b6942b129d7c8521ee000cfef644d829c1465,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760780305478293962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50dc2290ce14d7255ceeee4d0319c02fbeb45ec099e5320cc8ed64adfe1f6ea,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760780305397155654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernet
es.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491f718eb22fdfde908afbfc80efb672e3b0add928d31a9889c118be7ccb2c74,PodSandboxId:44f3a3c9e0095aadfac61abe79a1b3390b096d88e35c84458b01cd930d104b69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17607
80290010005371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa
170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760780289299644884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4129f2037f9f283b2dcc43160d4a1cceb6aa28b44ee94c719f0b35e305d9b153,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760780289171309478,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201cd2c1e158f4cd558f4e00ed2a29b45d3ac5d6699b526aa3173435
fc58e0e7,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760780289121980073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ee425859eef16741c99f5436243e9681dfb3da34b4721472f7361d29ba47dc,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760780289080277787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a58eff517402787ff5f9c7a733a3863ce25d66a53c0f5e8fd09bd24cfa911d5,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760780289074322882,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a9247b81b38d66905d17b1c7125eaaca22be0c11e84329c01545bf8a63d3f7,PodSandboxId:bb01f32caae719bbc3469508464a92bb8a47233ccc330705fa6ae20a98e3a7a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760780224492759843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8efdbf89-8d69-4412-864c-6db98b23c34c name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.976238793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d70181b1-2d3f-4502-a661-90d59b3aa146 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.976618468Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d70181b1-2d3f-4502-a661-90d59b3aa146 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.978734942Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6dac7245-ce56-4a67-9f15-392b0c93d59b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.979534343Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760780328979501881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6dac7245-ce56-4a67-9f15-392b0c93d59b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.980578157Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ed0a12c-cd8d-442b-92ec-b247e9c78034 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.980669500Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ed0a12c-cd8d-442b-92ec-b247e9c78034 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:38:48 pause-251981 crio[2808]: time="2025-10-18 09:38:48.981087488Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:892b972e09f42c98bce80d50a7138fa3b1c7f065a30b709d83c34a1be5641266,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760780311203743281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e627267a6b15f9dab3004645432031a2cb9b36cc81e305dd7962c2f8fd477595,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760780305439137500,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcb2d4664bef892714e316cbc572e8176f8bfdd60aa897517075752efe53ee19,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760780305454024396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a752a128dd5424649f37cb4d8a0b6942b129d7c8521ee000cfef644d829c1465,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760780305478293962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50dc2290ce14d7255ceeee4d0319c02fbeb45ec099e5320cc8ed64adfe1f6ea,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760780305397155654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernet
es.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491f718eb22fdfde908afbfc80efb672e3b0add928d31a9889c118be7ccb2c74,PodSandboxId:44f3a3c9e0095aadfac61abe79a1b3390b096d88e35c84458b01cd930d104b69,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17607
80290010005371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb,PodSandboxId:bc13c2072329b53d87a66bc042d13c327338a7265c42fa
170c387eaf90aca7a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760780289299644884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh69n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ff45f3-e63f-4bc3-8bf8-d805a6f89864,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4129f2037f9f283b2dcc43160d4a1cceb6aa28b44ee94c719f0b35e305d9b153,PodSandboxId:c7058358f880617efa02f2d27e9303284b35433df535d99b1b5187a7072d7f5b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760780289171309478,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58baf79f62aa5f6561d388f3289f8931,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:201cd2c1e158f4cd558f4e00ed2a29b45d3ac5d6699b526aa3173435
fc58e0e7,PodSandboxId:79390eaf4de1648fee543465081e6e27295387e9e5f82a618f39487d384809da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760780289121980073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 796b7d70b0f8a722cf83fe465c4b2017,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02ee425859eef16741c99f5436243e9681dfb3da34b4721472f7361d29ba47dc,PodSandboxId:76e5fecbb9a139ab265bddec8feb0944f10440e52f9cb7a494a3d6c700e132b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760780289080277787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef8d22f0acfad161fd7159db2ab3aaa,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a58eff517402787ff5f9c7a733a3863ce25d66a53c0f5e8fd09bd24cfa911d5,PodSandboxId:49112bf07bb4fab709b8c109c39bbc8b51964b95fb8c72dc739bfa5360780e21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760780289074322882,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-251981,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896068ef5175d9af2bc27f8f789b5ff4,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a9247b81b38d66905d17b1c7125eaaca22be0c11e84329c01545bf8a63d3f7,PodSandboxId:bb01f32caae719bbc3469508464a92bb8a47233ccc330705fa6ae20a98e3a7a6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760780224492759843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gkqrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80039a0f-d663-4568-85a8-f35ea7394b79,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ed0a12c-cd8d-442b-92ec-b247e9c78034 name=/runtime.v1.RuntimeService/ListContainers
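	(Note: the repeated ListContainers entries above are kubelet's periodic polls of CRI-O over the CRI gRPC API. For reference, a minimal sketch of issuing the same RuntimeService/ListContainers call with the CRI v1 Go client is shown below; the socket path, timeout, and output formatting are assumptions for illustration, not taken from this report.)

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's default socket; adjust if the runtime is configured differently.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter returns the full container list, matching the
		// "No filters were applied, returning full container list" debug lines above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\tattempt=%d\t%s\n", c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}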
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	892b972e09f42       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   17 seconds ago       Running             kube-proxy                2                   bc13c2072329b       kube-proxy-hh69n
	a752a128dd542       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   23 seconds ago       Running             kube-controller-manager   2                   79390eaf4de16       kube-controller-manager-pause-251981
	dcb2d4664bef8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   23 seconds ago       Running             kube-scheduler            2                   49112bf07bb4f       kube-scheduler-pause-251981
	e627267a6b15f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   23 seconds ago       Running             etcd                      2                   76e5fecbb9a13       etcd-pause-251981
	d50dc2290ce14       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   23 seconds ago       Running             kube-apiserver            2                   c7058358f8806       kube-apiserver-pause-251981
	491f718eb22fd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   39 seconds ago       Running             coredns                   1                   44f3a3c9e0095       coredns-66bc5c9577-gkqrn
	fc01258562672       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   39 seconds ago       Exited              kube-proxy                1                   bc13c2072329b       kube-proxy-hh69n
	4129f2037f9f2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   39 seconds ago       Exited              kube-apiserver            1                   c7058358f8806       kube-apiserver-pause-251981
	201cd2c1e158f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   39 seconds ago       Exited              kube-controller-manager   1                   79390eaf4de16       kube-controller-manager-pause-251981
	02ee425859eef       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   40 seconds ago       Exited              etcd                      1                   76e5fecbb9a13       etcd-pause-251981
	8a58eff517402       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   40 seconds ago       Exited              kube-scheduler            1                   49112bf07bb4f       kube-scheduler-pause-251981
	b2a9247b81b38       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   bb01f32caae71       coredns-66bc5c9577-gkqrn
	
	
	==> coredns [491f718eb22fdfde908afbfc80efb672e3b0add928d31a9889c118be7ccb2c74] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43983 - 15372 "HINFO IN 6681702715175531830.7065157308043952480. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079482259s
	
	
	==> coredns [b2a9247b81b38d66905d17b1c7125eaaca22be0c11e84329c01545bf8a63d3f7] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
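	(Note: the failed List calls above come from the coredns kubernetes plugin's reflector timing out against the service VIP 10.96.0.1:443 while the apiserver restarts. A minimal client-go sketch of the same list operation is shown below; it assumes in-cluster config and is illustrative only.)

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config resolves the same kubernetes service VIP that the
		// reflector above fails to reach while the control plane is down.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Mirrors the reflector's "services?limit=500" list request; with the
		// apiserver unreachable this surfaces the same dial i/o timeout.
		svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{Limit: 500})
		if err != nil {
			panic(err)
		}
		fmt.Printf("listed %d services\n", len(svcs.Items))
	}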
	
	
	==> describe nodes <==
	Name:               pause-251981
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-251981
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=pause-251981
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_36_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:36:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-251981
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:38:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:38:28 +0000   Sat, 18 Oct 2025 09:36:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:38:28 +0000   Sat, 18 Oct 2025 09:36:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:38:28 +0000   Sat, 18 Oct 2025 09:36:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:38:28 +0000   Sat, 18 Oct 2025 09:36:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.16
	  Hostname:    pause-251981
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 75698a6d3efc46e289dc91cd9c46d9b8
	  System UUID:                75698a6d-3efc-46e2-89dc-91cd9c46d9b8
	  Boot ID:                    a25046a1-fd19-4efa-a6e1-6f0b9b494494
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-gkqrn                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     106s
	  kube-system                 etcd-pause-251981                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         112s
	  kube-system                 kube-apiserver-pause-251981             250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-pause-251981    200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-hh69n                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-pause-251981             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasSufficientPID     112s               kubelet          Node pause-251981 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  112s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node pause-251981 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node pause-251981 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  NodeReady                111s               kubelet          Node pause-251981 status is now: NodeReady
	  Normal  RegisteredNode           107s               node-controller  Node pause-251981 event: Registered Node pause-251981 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  24s (x8 over 25s)  kubelet          Node pause-251981 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 25s)  kubelet          Node pause-251981 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 25s)  kubelet          Node pause-251981 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15s                node-controller  Node pause-251981 event: Registered Node pause-251981 in Controller
	
	
	==> dmesg <==
	[Oct18 09:36] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000000] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001500] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.013233] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.190833] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000027] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000006] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.109400] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.115458] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.105095] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.159929] kauditd_printk_skb: 171 callbacks suppressed
	[Oct18 09:37] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.713733] kauditd_printk_skb: 219 callbacks suppressed
	[ +21.700032] kauditd_printk_skb: 38 callbacks suppressed
	[Oct18 09:38] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.140058] kauditd_printk_skb: 254 callbacks suppressed
	[  +6.723761] kauditd_printk_skb: 81 callbacks suppressed
	
	
	==> etcd [02ee425859eef16741c99f5436243e9681dfb3da34b4721472f7361d29ba47dc] <==
	{"level":"warn","ts":"2025-10-18T09:38:12.315462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:38:12.327569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:38:12.338650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:38:12.352329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:38:12.378305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:38:12.408672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:38:12.482662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44646","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:38:21.314562Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T09:38:21.314812Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-251981","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.16:2380"],"advertise-client-urls":["https://192.168.72.16:2379"]}
	{"level":"error","ts":"2025-10-18T09:38:21.315074Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T09:38:21.315321Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T09:38:21.317381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:38:21.317478Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3a93d4f7634551e8","current-leader-member-id":"3a93d4f7634551e8"}
	{"level":"info","ts":"2025-10-18T09:38:21.317582Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-18T09:38:21.317595Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-18T09:38:21.318071Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.16:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T09:38:21.318256Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.16:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T09:38:21.318291Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.16:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T09:38:21.318224Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T09:38:21.318358Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T09:38:21.318383Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:38:21.325017Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.72.16:2380"}
	{"level":"error","ts":"2025-10-18T09:38:21.325171Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.16:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:38:21.325212Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.72.16:2380"}
	{"level":"info","ts":"2025-10-18T09:38:21.325232Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-251981","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.16:2380"],"advertise-client-urls":["https://192.168.72.16:2379"]}
	
	
	==> etcd [e627267a6b15f9dab3004645432031a2cb9b36cc81e305dd7962c2f8fd477595] <==
	{"level":"warn","ts":"2025-10-18T09:38:30.980991Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T09:38:30.076355Z","time spent":"904.6148ms","remote":"127.0.0.1:36506","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-251981\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-251981\" value_size:3288 >> failure:<>"}
	{"level":"warn","ts":"2025-10-18T09:38:30.981065Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"527.568901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:basic-user\" limit:1 ","response":"range_response_count:1 size:678"}
	{"level":"info","ts":"2025-10-18T09:38:30.981092Z","caller":"traceutil/trace.go:172","msg":"trace[1902514108] range","detail":"{range_begin:/registry/clusterroles/system:basic-user; range_end:; response_count:1; response_revision:460; }","duration":"527.595863ms","start":"2025-10-18T09:38:30.453487Z","end":"2025-10-18T09:38:30.981083Z","steps":["trace[1902514108] 'agreement among raft nodes before linearized reading'  (duration: 527.512417ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:38:30.981115Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T09:38:30.453471Z","time spent":"527.637557ms","remote":"127.0.0.1:36860","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":701,"request content":"key:\"/registry/clusterroles/system:basic-user\" limit:1 "}
	{"level":"warn","ts":"2025-10-18T09:38:30.981199Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"528.045938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-gkqrn\" limit:1 ","response":"range_response_count:1 size:5450"}
	{"level":"info","ts":"2025-10-18T09:38:30.981222Z","caller":"traceutil/trace.go:172","msg":"trace[54133699] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-gkqrn; range_end:; response_count:1; response_revision:460; }","duration":"528.069081ms","start":"2025-10-18T09:38:30.453146Z","end":"2025-10-18T09:38:30.981215Z","steps":["trace[54133699] 'agreement among raft nodes before linearized reading'  (duration: 528.000332ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:38:30.981246Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T09:38:30.453142Z","time spent":"528.09239ms","remote":"127.0.0.1:36506","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":5473,"request content":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-gkqrn\" limit:1 "}
	{"level":"info","ts":"2025-10-18T09:38:31.298385Z","caller":"traceutil/trace.go:172","msg":"trace[1031974631] linearizableReadLoop","detail":"{readStateIndex:496; appliedIndex:496; }","duration":"295.516453ms","start":"2025-10-18T09:38:31.002839Z","end":"2025-10-18T09:38:31.298356Z","steps":["trace[1031974631] 'read index received'  (duration: 295.490741ms)","trace[1031974631] 'applied index is now lower than readState.Index'  (duration: 24.513µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:38:31.298680Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"295.819825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/view\" limit:1 ","response":"range_response_count:1 size:2208"}
	{"level":"info","ts":"2025-10-18T09:38:31.298705Z","caller":"traceutil/trace.go:172","msg":"trace[2060153418] range","detail":"{range_begin:/registry/clusterroles/view; range_end:; response_count:1; response_revision:460; }","duration":"295.864031ms","start":"2025-10-18T09:38:31.002834Z","end":"2025-10-18T09:38:31.298698Z","steps":["trace[2060153418] 'agreement among raft nodes before linearized reading'  (duration: 295.717559ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:38:31.298997Z","caller":"traceutil/trace.go:172","msg":"trace[2094374882] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"297.790673ms","start":"2025-10-18T09:38:31.001195Z","end":"2025-10-18T09:38:31.298985Z","steps":["trace[2094374882] 'process raft request'  (duration: 297.643431ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:38:31.503103Z","caller":"traceutil/trace.go:172","msg":"trace[861030731] linearizableReadLoop","detail":"{readStateIndex:497; appliedIndex:497; }","duration":"200.048267ms","start":"2025-10-18T09:38:31.302943Z","end":"2025-10-18T09:38:31.502991Z","steps":["trace[861030731] 'read index received'  (duration: 200.041117ms)","trace[861030731] 'applied index is now lower than readState.Index'  (duration: 6.017µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:38:31.575796Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"272.844706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" limit:1 ","response":"range_response_count:1 size:840"}
	{"level":"info","ts":"2025-10-18T09:38:31.576127Z","caller":"traceutil/trace.go:172","msg":"trace[1881401895] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:1; response_revision:461; }","duration":"273.194912ms","start":"2025-10-18T09:38:31.302925Z","end":"2025-10-18T09:38:31.576120Z","steps":["trace[1881401895] 'agreement among raft nodes before linearized reading'  (duration: 200.857524ms)","trace[1881401895] 'range keys from in-memory index tree'  (duration: 71.868836ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:38:31.576205Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"271.830966ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-251981\" limit:1 ","response":"range_response_count:1 size:7219"}
	{"level":"info","ts":"2025-10-18T09:38:31.576089Z","caller":"traceutil/trace.go:172","msg":"trace[705410322] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"379.766258ms","start":"2025-10-18T09:38:31.196308Z","end":"2025-10-18T09:38:31.576074Z","steps":["trace[705410322] 'process raft request'  (duration: 307.285464ms)","trace[705410322] 'compare'  (duration: 72.328732ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:38:31.579144Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T09:38:31.196286Z","time spent":"382.805233ms","remote":"127.0.0.1:36292","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":762,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-hh69n.186f8c5d3011ce84\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-hh69n.186f8c5d3011ce84\" value_size:682 lease:5902136596450740629 >> failure:<>"}
	{"level":"info","ts":"2025-10-18T09:38:31.581259Z","caller":"traceutil/trace.go:172","msg":"trace[1247655257] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-251981; range_end:; response_count:1; response_revision:462; }","duration":"276.879486ms","start":"2025-10-18T09:38:31.304361Z","end":"2025-10-18T09:38:31.581241Z","steps":["trace[1247655257] 'agreement among raft nodes before linearized reading'  (duration: 271.637739ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:38:31.940471Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"283.142036ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:kube-dns\" limit:1 ","response":"range_response_count:1 size:576"}
	{"level":"info","ts":"2025-10-18T09:38:31.940560Z","caller":"traceutil/trace.go:172","msg":"trace[435765997] range","detail":"{range_begin:/registry/clusterroles/system:kube-dns; range_end:; response_count:1; response_revision:466; }","duration":"283.303681ms","start":"2025-10-18T09:38:31.657244Z","end":"2025-10-18T09:38:31.940548Z","steps":["trace[435765997] 'agreement among raft nodes before linearized reading'  (duration: 92.771258ms)","trace[435765997] 'range keys from in-memory index tree'  (duration: 190.288897ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:38:31.940808Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.66082ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5902136596450740732 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-251981\" mod_revision:387 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-251981\" value_size:6744 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-251981\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T09:38:31.940933Z","caller":"traceutil/trace.go:172","msg":"trace[1672429614] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"287.786473ms","start":"2025-10-18T09:38:31.653137Z","end":"2025-10-18T09:38:31.940923Z","steps":["trace[1672429614] 'process raft request'  (duration: 96.959159ms)","trace[1672429614] 'compare'  (duration: 190.219894ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:38:37.635016Z","caller":"traceutil/trace.go:172","msg":"trace[1502992284] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"215.778349ms","start":"2025-10-18T09:38:37.419220Z","end":"2025-10-18T09:38:37.634998Z","steps":["trace[1502992284] 'process raft request'  (duration: 215.677471ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:38:37.860267Z","caller":"traceutil/trace.go:172","msg":"trace[1486154118] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"205.00964ms","start":"2025-10-18T09:38:37.655236Z","end":"2025-10-18T09:38:37.860245Z","steps":["trace[1486154118] 'process raft request'  (duration: 203.126044ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:38:49 up 2 min,  0 users,  load average: 1.07, 0.43, 0.16
	Linux pause-251981 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4129f2037f9f283b2dcc43160d4a1cceb6aa28b44ee94c719f0b35e305d9b153] <==
	{"level":"warn","ts":"2025-10-18T09:38:15.642565Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":86,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.666758Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":87,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.690728Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":88,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.715292Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":89,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.739101Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":90,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.764680Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":91,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.789463Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":92,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.814017Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":93,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.840511Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":94,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.865537Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":95,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.892951Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":96,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.918383Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":97,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.945528Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":98,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-10-18T09:38:15.970131Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00102a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":99,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	E1018 09:38:15.970227       1 controller.go:97] Error removing old endpoints from kubernetes service: rpc error: code = Canceled desc = grpc: the client connection is closing
	W1018 09:38:16.184641       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1018 09:38:16.185084       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1018 09:38:17.183794       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1018 09:38:17.183798       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	E1018 09:38:18.184646       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1018 09:38:18.184874       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1018 09:38:19.184647       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1018 09:38:19.184672       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W1018 09:38:20.184339       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1018 09:38:20.184575       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-apiserver [d50dc2290ce14d7255ceeee4d0319c02fbeb45ec099e5320cc8ed64adfe1f6ea] <==
	I1018 09:38:28.791297       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:38:28.791471       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:38:28.792780       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 09:38:28.792839       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:38:28.792852       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:38:28.792861       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:38:28.792868       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:38:28.820524       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:38:28.820653       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:38:28.820914       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 09:38:28.821271       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 09:38:28.821297       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:38:28.822510       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:38:28.822681       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:38:28.826254       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:38:28.835198       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:38:29.643816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:38:30.450520       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1018 09:38:32.259619       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.72.16]
	I1018 09:38:32.261353       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:38:32.267963       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:38:32.520916       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:38:32.575080       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:38:32.609579       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:38:32.617463       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [201cd2c1e158f4cd558f4e00ed2a29b45d3ac5d6699b526aa3173435fc58e0e7] <==
	I1018 09:38:11.249292       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:38:12.261219       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 09:38:12.261267       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:38:12.266329       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 09:38:12.267465       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 09:38:12.267652       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 09:38:12.267875       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [a752a128dd5424649f37cb4d8a0b6942b129d7c8521ee000cfef644d829c1465] <==
	I1018 09:38:34.057347       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:38:34.057618       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:38:34.057654       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:38:34.061582       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:38:34.062873       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 09:38:34.065114       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:38:34.066323       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:38:34.066396       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:38:34.071806       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:38:34.071864       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:38:34.072208       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:38:34.074163       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:38:34.075322       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:38:34.076688       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:38:34.076893       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:38:34.077285       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:38:34.081265       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:38:34.085115       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:38:34.094556       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:38:34.101387       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:38:34.122845       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 09:38:34.199969       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:38:34.199987       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:38:34.199993       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:38:34.224132       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [892b972e09f42c98bce80d50a7138fa3b1c7f065a30b709d83c34a1be5641266] <==
	I1018 09:38:31.655662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:38:31.756632       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:38:31.756679       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.16"]
	E1018 09:38:31.756801       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:38:31.801631       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 09:38:31.801718       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 09:38:31.801758       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:38:31.812225       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:38:31.812747       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:38:31.812763       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:38:31.814355       1 config.go:200] "Starting service config controller"
	I1018 09:38:31.814385       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:38:31.814759       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:38:31.814793       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:38:31.814847       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:38:31.814864       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:38:31.820262       1 config.go:309] "Starting node config controller"
	I1018 09:38:31.821302       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:38:31.821355       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:38:31.915053       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:38:31.915260       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:38:31.915364       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb] <==
	
	
	==> kube-scheduler [8a58eff517402787ff5f9c7a733a3863ce25d66a53c0f5e8fd09bd24cfa911d5] <==
	E1018 09:38:17.101612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.72.16:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:38:17.144677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.72.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:38:17.148380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.72.16:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:38:17.205360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.72.16:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:38:17.349079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.72.16:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:38:17.351946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.72.16:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:38:17.376590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.72.16:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:38:17.689075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.72.16:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:38:17.849955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.72.16:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:38:17.878745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.72.16:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:38:17.905776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.72.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:38:19.960245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.72.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 09:38:20.378994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.72.16:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:38:20.532330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.72.16:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:38:20.789176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.72.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:38:20.944763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.72.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:38:21.439210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.72.16:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.72.16:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:38:21.456065       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1018 09:38:21.456504       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 09:38:21.456524       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 09:38:21.456573       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1018 09:38:21.456596       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:38:21.456646       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:38:21.456650       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 09:38:21.456715       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dcb2d4664bef892714e316cbc572e8176f8bfdd60aa897517075752efe53ee19] <==
	I1018 09:38:27.875011       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:38:28.736942       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:38:28.736977       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:38:28.736986       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:38:28.736993       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:38:28.783606       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:38:28.783654       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:38:28.788301       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:38:28.789881       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:38:28.789978       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:38:28.808021       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:38:28.908185       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:38:28 pause-251981 kubelet[3827]: I1018 09:38:28.794498    3827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-251981"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: E1018 09:38:28.823165    3827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-251981\" already exists" pod="kube-system/etcd-pause-251981"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: I1018 09:38:28.823221    3827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-251981"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: E1018 09:38:28.851025    3827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-251981\" already exists" pod="kube-system/kube-apiserver-pause-251981"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: I1018 09:38:28.851080    3827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-251981"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: I1018 09:38:28.854133    3827 kubelet_node_status.go:124] "Node was previously registered" node="pause-251981"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: I1018 09:38:28.854260    3827 kubelet_node_status.go:78] "Successfully registered node" node="pause-251981"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: I1018 09:38:28.854294    3827 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: I1018 09:38:28.855687    3827 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:38:28 pause-251981 kubelet[3827]: E1018 09:38:28.872070    3827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-251981\" already exists" pod="kube-system/kube-controller-manager-pause-251981"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: I1018 09:38:29.068072    3827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-251981"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: I1018 09:38:29.068331    3827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-251981"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: E1018 09:38:29.084314    3827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-251981\" already exists" pod="kube-system/kube-scheduler-pause-251981"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: E1018 09:38:29.085591    3827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-251981\" already exists" pod="kube-system/etcd-pause-251981"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: I1018 09:38:29.681934    3827 apiserver.go:52] "Watching apiserver"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: I1018 09:38:29.737277    3827 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: I1018 09:38:29.759528    3827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91ff45f3-e63f-4bc3-8bf8-d805a6f89864-lib-modules\") pod \"kube-proxy-hh69n\" (UID: \"91ff45f3-e63f-4bc3-8bf8-d805a6f89864\") " pod="kube-system/kube-proxy-hh69n"
	Oct 18 09:38:29 pause-251981 kubelet[3827]: I1018 09:38:29.759642    3827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91ff45f3-e63f-4bc3-8bf8-d805a6f89864-xtables-lock\") pod \"kube-proxy-hh69n\" (UID: \"91ff45f3-e63f-4bc3-8bf8-d805a6f89864\") " pod="kube-system/kube-proxy-hh69n"
	Oct 18 09:38:30 pause-251981 kubelet[3827]: I1018 09:38:30.072310    3827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-251981"
	Oct 18 09:38:30 pause-251981 kubelet[3827]: E1018 09:38:30.997853    3827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-251981\" already exists" pod="kube-system/kube-scheduler-pause-251981"
	Oct 18 09:38:31 pause-251981 kubelet[3827]: I1018 09:38:31.189588    3827 scope.go:117] "RemoveContainer" containerID="fc012585626724b3556d7a6649f793af6fb0753ed46c789bce7cb3ea391413bb"
	Oct 18 09:38:34 pause-251981 kubelet[3827]: E1018 09:38:34.925051    3827 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760780314924211064  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 09:38:34 pause-251981 kubelet[3827]: E1018 09:38:34.925107    3827 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760780314924211064  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 09:38:44 pause-251981 kubelet[3827]: E1018 09:38:44.927865    3827 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760780324927234157  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 18 09:38:44 pause-251981 kubelet[3827]: E1018 09:38:44.927924    3827 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760780324927234157  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-251981 -n pause-251981
helpers_test.go:269: (dbg) Run:  kubectl --context pause-251981 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (71.75s)

                                                
                                    

Test pass (280/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.05
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 3.71
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.15
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.67
22 TestOffline 54.35
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 196.99
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 10.52
35 TestAddons/parallel/Registry 19.01
36 TestAddons/parallel/RegistryCreds 0.74
38 TestAddons/parallel/InspektorGadget 6.37
39 TestAddons/parallel/MetricsServer 6.39
41 TestAddons/parallel/CSI 54.58
42 TestAddons/parallel/Headlamp 22.05
43 TestAddons/parallel/CloudSpanner 5.64
44 TestAddons/parallel/LocalPath 55.98
45 TestAddons/parallel/NvidiaDevicePlugin 6.69
46 TestAddons/parallel/Yakd 11.99
48 TestAddons/StoppedEnableDisable 85.93
49 TestCertOptions 58.03
50 TestCertExpiration 277.63
52 TestForceSystemdFlag 43.37
53 TestForceSystemdEnv 68.3
55 TestKVMDriverInstallOrUpdate 0.82
59 TestErrorSpam/setup 38.13
60 TestErrorSpam/start 0.38
61 TestErrorSpam/status 0.8
62 TestErrorSpam/pause 1.69
63 TestErrorSpam/unpause 1.96
64 TestErrorSpam/stop 5.2
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 85.99
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 156.58
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.36
76 TestFunctional/serial/CacheCmd/cache/add_local 1.98
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.77
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 372.45
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.48
87 TestFunctional/serial/LogsFileCmd 1.46
88 TestFunctional/serial/InvalidService 4.42
90 TestFunctional/parallel/ConfigCmd 0.34
91 TestFunctional/parallel/DashboardCmd 44.73
92 TestFunctional/parallel/DryRun 0.3
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 1.15
98 TestFunctional/parallel/ServiceCmdConnect 9.58
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 48.46
102 TestFunctional/parallel/SSHCmd 0.4
103 TestFunctional/parallel/CpCmd 1.43
104 TestFunctional/parallel/MySQL 23.4
105 TestFunctional/parallel/FileSync 0.21
106 TestFunctional/parallel/CertSync 1.4
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
114 TestFunctional/parallel/License 0.39
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.45
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
120 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
121 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
122 TestFunctional/parallel/ImageCommands/ImageListJson 0.74
123 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
125 TestFunctional/parallel/ImageCommands/Setup 1.5
126 TestFunctional/parallel/ServiceCmd/DeployApp 8.19
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.54
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.95
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.58
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
143 TestFunctional/parallel/ServiceCmd/List 0.48
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
146 TestFunctional/parallel/ServiceCmd/Format 0.31
147 TestFunctional/parallel/ServiceCmd/URL 0.39
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
149 TestFunctional/parallel/ProfileCmd/profile_list 0.53
150 TestFunctional/parallel/MountCmd/any-port 24.7
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
152 TestFunctional/parallel/MountCmd/specific-port 1.94
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.82
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 235.76
162 TestMultiControlPlane/serial/DeployApp 7.63
163 TestMultiControlPlane/serial/PingHostFromPods 1.23
164 TestMultiControlPlane/serial/AddWorkerNode 44.57
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
167 TestMultiControlPlane/serial/CopyFile 13.56
168 TestMultiControlPlane/serial/StopSecondaryNode 82.73
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
170 TestMultiControlPlane/serial/RestartSecondaryNode 37.09
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.97
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 384.45
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.65
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
175 TestMultiControlPlane/serial/StopCluster 243.87
176 TestMultiControlPlane/serial/RestartCluster 106.42
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
178 TestMultiControlPlane/serial/AddSecondaryNode 75.6
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.94
183 TestJSONOutput/start/Command 56.45
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.82
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.7
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.27
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 84.38
215 TestMountStart/serial/StartWithMountFirst 21.36
216 TestMountStart/serial/VerifyMountFirst 0.38
217 TestMountStart/serial/StartWithMountSecond 23.51
218 TestMountStart/serial/VerifyMountSecond 0.39
219 TestMountStart/serial/DeleteFirst 0.75
220 TestMountStart/serial/VerifyMountPostDelete 0.38
221 TestMountStart/serial/Stop 1.3
222 TestMountStart/serial/RestartStopped 17.33
223 TestMountStart/serial/VerifyMountPostStop 0.39
226 TestMultiNode/serial/FreshStart2Nodes 131.34
227 TestMultiNode/serial/DeployApp2Nodes 6.5
228 TestMultiNode/serial/PingHostFrom2Pods 0.81
229 TestMultiNode/serial/AddNode 42.93
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.61
232 TestMultiNode/serial/CopyFile 7.42
233 TestMultiNode/serial/StopNode 2.47
234 TestMultiNode/serial/StartAfterStop 38.91
235 TestMultiNode/serial/RestartKeepsNodes 335.96
236 TestMultiNode/serial/DeleteNode 2.77
237 TestMultiNode/serial/StopMultiNode 165.87
238 TestMultiNode/serial/RestartMultiNode 86.37
239 TestMultiNode/serial/ValidateNameConflict 43.03
246 TestScheduledStopUnix 112.18
250 TestRunningBinaryUpgrade 149.92
252 TestKubernetesUpgrade 265.69
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
263 TestNoKubernetes/serial/StartWithK8s 82.96
264 TestNoKubernetes/serial/StartWithStopK8s 45.97
272 TestNetworkPlugins/group/false 3.5
273 TestNoKubernetes/serial/Start 24.88
277 TestStoppedBinaryUpgrade/Setup 0.89
278 TestStoppedBinaryUpgrade/Upgrade 111.71
280 TestPause/serial/Start 106.65
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
282 TestNoKubernetes/serial/ProfileList 0.88
283 TestNoKubernetes/serial/Stop 1.26
284 TestNoKubernetes/serial/StartNoArgs 60.49
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
286 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
289 TestStartStop/group/old-k8s-version/serial/FirstStart 102.08
291 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.97
293 TestStartStop/group/embed-certs/serial/FirstStart 95.98
294 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
295 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
296 TestStartStop/group/default-k8s-diff-port/serial/Stop 83.24
297 TestStartStop/group/old-k8s-version/serial/DeployApp 10.33
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.14
299 TestStartStop/group/old-k8s-version/serial/Stop 70.92
300 TestStartStop/group/embed-certs/serial/DeployApp 10.29
301 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
302 TestStartStop/group/embed-certs/serial/Stop 85.41
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
304 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.67
305 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
306 TestStartStop/group/old-k8s-version/serial/SecondStart 58.13
308 TestStartStop/group/no-preload/serial/FirstStart 90.84
309 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.01
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
311 TestStartStop/group/embed-certs/serial/SecondStart 55.05
312 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
313 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
314 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.58
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 15.01
317 TestStartStop/group/newest-cni/serial/FirstStart 56.44
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
319 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
320 TestStartStop/group/old-k8s-version/serial/Pause 4.09
321 TestNetworkPlugins/group/auto/Start 88.55
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
324 TestStartStop/group/no-preload/serial/DeployApp 11.36
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
326 TestStartStop/group/embed-certs/serial/Pause 3.35
327 TestNetworkPlugins/group/calico/Start 73.55
328 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.38
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.42
331 TestStartStop/group/no-preload/serial/Stop 88.44
332 TestStartStop/group/newest-cni/serial/Stop 7.44
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
334 TestStartStop/group/newest-cni/serial/SecondStart 44.98
335 TestNetworkPlugins/group/auto/KubeletFlags 0.28
336 TestNetworkPlugins/group/auto/NetCatPod 10.32
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
340 TestStartStop/group/newest-cni/serial/Pause 3.07
341 TestNetworkPlugins/group/custom-flannel/Start 74.6
342 TestNetworkPlugins/group/auto/DNS 0.17
343 TestNetworkPlugins/group/auto/Localhost 0.13
344 TestNetworkPlugins/group/auto/HairPin 0.13
345 TestNetworkPlugins/group/calico/ControllerPod 6.01
346 TestNetworkPlugins/group/calico/KubeletFlags 0.22
347 TestNetworkPlugins/group/calico/NetCatPod 11.27
348 TestNetworkPlugins/group/kindnet/Start 64.97
349 TestNetworkPlugins/group/calico/DNS 0.17
350 TestNetworkPlugins/group/calico/Localhost 0.15
351 TestNetworkPlugins/group/calico/HairPin 0.16
352 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.63
353 TestStartStop/group/no-preload/serial/SecondStart 74.67
354 TestNetworkPlugins/group/flannel/Start 93.39
355 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
356 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.31
357 TestNetworkPlugins/group/custom-flannel/DNS 0.22
358 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
359 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
360 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
362 TestNetworkPlugins/group/kindnet/NetCatPod 12.32
363 TestNetworkPlugins/group/enable-default-cni/Start 81.08
364 TestNetworkPlugins/group/kindnet/DNS 0.21
365 TestNetworkPlugins/group/kindnet/Localhost 0.14
366 TestNetworkPlugins/group/kindnet/HairPin 0.16
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
368 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.08
369 TestNetworkPlugins/group/bridge/Start 87.33
370 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
371 TestStartStop/group/no-preload/serial/Pause 3.29
372 TestNetworkPlugins/group/flannel/ControllerPod 6.01
373 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
374 TestNetworkPlugins/group/flannel/NetCatPod 11.47
375 TestNetworkPlugins/group/flannel/DNS 0.15
376 TestNetworkPlugins/group/flannel/Localhost 0.13
377 TestNetworkPlugins/group/flannel/HairPin 0.13
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.24
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
384 TestNetworkPlugins/group/bridge/NetCatPod 9.26
385 TestNetworkPlugins/group/bridge/DNS 0.15
386 TestNetworkPlugins/group/bridge/Localhost 0.12
387 TestNetworkPlugins/group/bridge/HairPin 0.23
x
+
TestDownloadOnly/v1.28.0/json-events (7.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-005088 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-005088 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (7.052893397s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1018 08:29:27.451361    9956 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1018 08:29:27.451462    9956 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
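(For reference, this check passes as long as the cached preload tarball is present on disk. A minimal manual spot-check, assuming the same cache location reported in the log above, would be:

	ls -lh /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4

The jenkins-specific MINIKUBE_HOME path is particular to this run; on another machine the tarball would live under that machine's .minikube cache directory.)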

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-005088
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-005088: exit status 85 (66.970996ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-005088 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-005088 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:29:20
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:29:20.442220    9968 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:29:20.442481    9968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:20.442491    9968 out.go:374] Setting ErrFile to fd 2...
	I1018 08:29:20.442496    9968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:20.442720    9968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
	W1018 08:29:20.442886    9968 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21767-6063/.minikube/config/config.json: open /home/jenkins/minikube-integration/21767-6063/.minikube/config/config.json: no such file or directory
	I1018 08:29:20.443465    9968 out.go:368] Setting JSON to true
	I1018 08:29:20.444514    9968 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":710,"bootTime":1760775450,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:29:20.444625    9968 start.go:141] virtualization: kvm guest
	I1018 08:29:20.447267    9968 out.go:99] [download-only-005088] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1018 08:29:20.447473    9968 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball: no such file or directory
	I1018 08:29:20.447487    9968 notify.go:220] Checking for updates...
	I1018 08:29:20.449429    9968 out.go:171] MINIKUBE_LOCATION=21767
	I1018 08:29:20.451461    9968 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:29:20.453306    9968 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	I1018 08:29:20.455103    9968 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	I1018 08:29:20.456745    9968 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1018 08:29:20.459672    9968 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 08:29:20.460040    9968 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:29:21.019681    9968 out.go:99] Using the kvm2 driver based on user configuration
	I1018 08:29:21.019717    9968 start.go:305] selected driver: kvm2
	I1018 08:29:21.019727    9968 start.go:925] validating driver "kvm2" against <nil>
	I1018 08:29:21.020219    9968 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:29:21.020420    9968 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21767-6063/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 08:29:21.037214    9968 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 08:29:21.037249    9968 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21767-6063/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 08:29:21.053157    9968 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 08:29:21.053211    9968 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:29:21.053737    9968 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1018 08:29:21.053896    9968 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 08:29:21.053930    9968 cni.go:84] Creating CNI manager for ""
	I1018 08:29:21.053988    9968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 08:29:21.054001    9968 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 08:29:21.054069    9968 start.go:349] cluster config:
	{Name:download-only-005088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-005088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:29:21.054291    9968 iso.go:125] acquiring lock: {Name:mk5e486e8f05c541fb7f7e8ec869cafc091f385a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:29:21.055883    9968 out.go:99] Downloading VM boot image ...
	I1018 08:29:21.055936    9968 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21767-6063/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 08:29:23.925989    9968 out.go:99] Starting "download-only-005088" primary control-plane node in "download-only-005088" cluster
	I1018 08:29:23.926039    9968 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 08:29:23.945413    9968 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1018 08:29:23.945460    9968 cache.go:58] Caching tarball of preloaded images
	I1018 08:29:23.945668    9968 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 08:29:23.947882    9968 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1018 08:29:23.947915    9968 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1018 08:29:23.968982    9968 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1018 08:29:23.969143    9968 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-005088 host does not exist
	  To start a cluster, run: "minikube start -p download-only-005088"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-005088
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (3.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-127464 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-127464 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3.71058493s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.71s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1018 08:29:31.537156    9956 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1018 08:29:31.537198    9956 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-6063/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-127464
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-127464: exit status 85 (64.748278ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-005088 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-005088 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-005088                                                                                                                                                                             │ download-only-005088 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-127464 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-127464 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:29:27
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:29:27.869322   10188 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:29:27.869582   10188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:27.869593   10188 out.go:374] Setting ErrFile to fd 2...
	I1018 08:29:27.869597   10188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:27.869776   10188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
	I1018 08:29:27.870290   10188 out.go:368] Setting JSON to true
	I1018 08:29:27.871097   10188 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":718,"bootTime":1760775450,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:29:27.871185   10188 start.go:141] virtualization: kvm guest
	I1018 08:29:27.873660   10188 out.go:99] [download-only-127464] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 08:29:27.873892   10188 notify.go:220] Checking for updates...
	I1018 08:29:27.875804   10188 out.go:171] MINIKUBE_LOCATION=21767
	I1018 08:29:27.877809   10188 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:29:27.879218   10188 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	I1018 08:29:27.881435   10188 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	I1018 08:29:27.883231   10188 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-127464 host does not exist
	  To start a cluster, run: "minikube start -p download-only-127464"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-127464
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.67s)

                                                
                                                
=== RUN   TestBinaryMirror
I1018 08:29:32.166379    9956 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-609442 --alsologtostderr --binary-mirror http://127.0.0.1:45099 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-609442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-609442
--- PASS: TestBinaryMirror (0.67s)

                                                
                                    
x
+
TestOffline (54.35s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-879720 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-879720 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.482206567s)
helpers_test.go:175: Cleaning up "offline-crio-879720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-879720
--- PASS: TestOffline (54.35s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-493204
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-493204: exit status 85 (54.245781ms)

                                                
                                                
-- stdout --
	* Profile "addons-493204" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-493204"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-493204
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-493204: exit status 85 (53.208072ms)

                                                
                                                
-- stdout --
	* Profile "addons-493204" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-493204"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (196.99s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-493204 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-493204 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m16.992896629s)
--- PASS: TestAddons/Setup (196.99s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-493204 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-493204 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-493204 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-493204 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e555c26b-13ad-4fca-a7c2-7ac393455c96] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e555c26b-13ad-4fca-a7c2-7ac393455c96] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.005158306s
addons_test.go:694: (dbg) Run:  kubectl --context addons-493204 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-493204 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-493204 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.52s)

                                                
                                    
x
+
TestAddons/parallel/Registry (19.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.034998ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-ctwfn" [4e1f70c3-697e-4014-9d2b-0602e2573195] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005846258s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-cs7st" [898b40c1-958b-4164-9267-77e9d0cc0574] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0035606s
addons_test.go:392: (dbg) Run:  kubectl --context addons-493204 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-493204 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-493204 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.042073933s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 ip
2025/10/18 08:33:27 [DEBUG] GET http://192.168.39.58:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.01s)
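(The in-cluster reachability probe used by this test can be re-run by hand with the same one-off busybox pod the test launches; the profile name addons-493204 is specific to this run, and the DNS name assumes the registry addon's default Service in kube-system:

	kubectl --context addons-493204 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

The host-side check then hits the node IP printed by "minikube -p addons-493204 ip" on port 5000, as shown in the GET line above.)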

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.74s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.299347ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-493204
addons_test.go:332: (dbg) Run:  kubectl --context addons-493204 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.74s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.37s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-59848" [4a4830ab-b635-43b5-9719-3dfef197e8df] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00606054s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.37s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.39s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.453853ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-6pwgl" [5d4e0aeb-2bc1-4b5e-8f4e-b85bd15cd66f] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004765493s
addons_test.go:463: (dbg) Run:  kubectl --context addons-493204 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-493204 addons disable metrics-server --alsologtostderr -v=1: (1.307663045s)
--- PASS: TestAddons/parallel/MetricsServer (6.39s)

                                                
                                    
x
+
TestAddons/parallel/CSI (54.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1018 08:33:28.430603    9956 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1018 08:33:28.435048    9956 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 08:33:28.435079    9956 kapi.go:107] duration metric: took 4.491211ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.505717ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-493204 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-493204 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [adca2eae-992f-4011-8d72-26284a2552ee] Pending
helpers_test.go:352: "task-pv-pod" [adca2eae-992f-4011-8d72-26284a2552ee] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [adca2eae-992f-4011-8d72-26284a2552ee] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.011026212s
addons_test.go:572: (dbg) Run:  kubectl --context addons-493204 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-493204 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-493204 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-493204 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-493204 delete pod task-pv-pod: (1.466148221s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-493204 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-493204 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-493204 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [4307a127-2fe2-45d8-9096-01add6f427f1] Pending
helpers_test.go:352: "task-pv-pod-restore" [4307a127-2fe2-45d8-9096-01add6f427f1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [4307a127-2fe2-45d8-9096-01add6f427f1] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005780147s
addons_test.go:614: (dbg) Run:  kubectl --context addons-493204 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-493204 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-493204 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-493204 addons disable volumesnapshots --alsologtostderr -v=1: (1.005367464s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-493204 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.062056313s)
--- PASS: TestAddons/parallel/CSI (54.58s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (22.05s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-493204 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-493204 --alsologtostderr -v=1: (1.061581599s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-52zx9" [410e90c4-e847-45f4-b668-7a45514ee8aa] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-52zx9" [410e90c4-e847-45f4-b668-7a45514ee8aa] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.005707959s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-493204 addons disable headlamp --alsologtostderr -v=1: (5.976956737s)
--- PASS: TestAddons/parallel/Headlamp (22.05s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.64s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-bqhbr" [8ad77bd9-d292-4bc5-aaf2-8b2745824d87] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008114039s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.64s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (55.98s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-493204 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-493204 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-493204 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [c7f81e73-a7a1-484f-b166-de1f82ee0dcd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [c7f81e73-a7a1-484f-b166-de1f82ee0dcd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [c7f81e73-a7a1-484f-b166-de1f82ee0dcd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.008029819s
addons_test.go:967: (dbg) Run:  kubectl --context addons-493204 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 ssh "cat /opt/local-path-provisioner/pvc-133026bd-3661-4364-b6b1-3e3ca819e2f7_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-493204 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-493204 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-493204 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.148353329s)
--- PASS: TestAddons/parallel/LocalPath (55.98s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.69s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-5crv7" [a4fe09fc-685f-4b43-959e-871a22fdb4c5] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.018197114s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.69s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-4smd4" [fde9862b-4031-4f33-a8ad-06f7b4fadb51] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003971483s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-493204 addons disable yakd --alsologtostderr -v=1: (5.985681561s)
--- PASS: TestAddons/parallel/Yakd (11.99s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (85.93s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-493204
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-493204: (1m25.648916565s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-493204
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-493204
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-493204
--- PASS: TestAddons/StoppedEnableDisable (85.93s)

                                                
                                    
x
+
TestCertOptions (58.03s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-586276 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:37:50.583630    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-586276 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (56.590433543s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-586276 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-586276 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-586276 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-586276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-586276
--- PASS: TestCertOptions (58.03s)
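
TestCertOptions starts the cluster with extra API-server SANs and a non-default port, then reads the generated certificate back out of the node. A minimal sketch of that check using the same flags as the run above; `my-profile` is a placeholder and `minikube` stands in for the built binary:

    minikube start -p my-profile --apiserver-ips=192.168.15.15 \
      --apiserver-names=www.google.com --apiserver-port=8555 \
      --driver=kvm2 --container-runtime=crio
    # confirm the extra SANs landed in the serving certificate
    minikube -p my-profile ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    # confirm the custom port shows up in the admin kubeconfig
    minikube ssh -p my-profile -- "sudo cat /etc/kubernetes/admin.conf" | grep server: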

                                                
                                    
x
+
TestCertExpiration (277.63s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-209551 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-209551 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (55.009809455s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-209551 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-209551 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.604523775s)
helpers_test.go:175: Cleaning up "cert-expiration-209551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-209551
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-209551: (1.009778766s)
--- PASS: TestCertExpiration (277.63s)
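
TestCertExpiration first starts with a 3-minute certificate lifetime and later restarts with --cert-expiration=8760h so the certificates are regenerated. A sketch for checking the resulting expiry, assuming openssl is available inside the node (the cert-options run above already relies on it):

    minikube start -p my-profile --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # after the short-lived certs have expired, restart with a one-year lifetime
    minikube start -p my-profile --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
    # inspect the regenerated certificate's notAfter date
    minikube -p my-profile ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"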

                                                
                                    
x
+
TestForceSystemdFlag (43.37s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-850953 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-850953 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.250901501s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-850953 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-850953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-850953
--- PASS: TestForceSystemdFlag (43.37s)
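
The force-systemd test starts with --force-systemd and then cats CRI-O's drop-in config to check which cgroup manager was selected. A sketch of the same check; the `cgroup_manager` key name is an assumption about the drop-in's contents, not something shown in this log:

    minikube start -p my-profile --force-systemd --driver=kvm2 --container-runtime=crio
    # the drop-in the test reads; with --force-systemd the systemd cgroup manager should be configured
    minikube -p my-profile ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager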

                                                
                                    
x
+
TestForceSystemdEnv (68.3s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-251727 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-251727 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.327692916s)
helpers_test.go:175: Cleaning up "force-systemd-env-251727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-251727
--- PASS: TestForceSystemdEnv (68.30s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0.82s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1018 09:35:37.824248    9956 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1018 09:35:37.824557    9956 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate78100573/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 09:35:37.853452    9956 install.go:163] /tmp/TestKVMDriverInstallOrUpdate78100573/001/docker-machine-driver-kvm2 version is 1.1.1
W1018 09:35:37.853514    9956 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1018 09:35:37.853671    9956 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1018 09:35:37.853742    9956 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate78100573/001/docker-machine-driver-kvm2
I1018 09:35:38.500556    9956 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate78100573/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 09:35:38.525565    9956 install.go:163] /tmp/TestKVMDriverInstallOrUpdate78100573/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.82s)
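
The log above shows minikube detecting a stale docker-machine-driver-kvm2 (1.1.1 against the wanted 1.37.0), re-downloading it from the GitHub release with a published checksum, and re-validating it. To fetch the same driver manually one could mirror the logged URL; checksum verification is left out of this sketch:

    curl -L -o docker-machine-driver-kvm2 \
      https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64
    chmod +x docker-machine-driver-kvm2
    # place the binary on PATH so minikube's driver validation can find it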

                                                
                                    
x
+
TestErrorSpam/setup (38.13s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-291393 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-291393 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 08:37:50.583219    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:50.592804    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:50.605076    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:50.626590    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:50.668124    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:50.749687    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:50.911335    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:51.233033    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:51.875174    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:53.157140    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:55.720075    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:38:00.841596    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-291393 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-291393 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.125339577s)
--- PASS: TestErrorSpam/setup (38.13s)

                                                
                                    
x
+
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
x
+
TestErrorSpam/status (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 status
--- PASS: TestErrorSpam/status (0.80s)

                                                
                                    
x
+
TestErrorSpam/pause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 pause
--- PASS: TestErrorSpam/pause (1.69s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.96s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 unpause
--- PASS: TestErrorSpam/unpause (1.96s)

                                                
                                    
x
+
TestErrorSpam/stop (5.2s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 stop: (1.955367072s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 stop: (2.047399613s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-291393 --log_dir /tmp/nospam-291393 stop: (1.198147858s)
--- PASS: TestErrorSpam/stop (5.20s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21767-6063/.minikube/files/etc/test/nested/copy/9956/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (85.99s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-679071 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 08:38:31.565169    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:39:12.528167    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-679071 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.986413219s)
--- PASS: TestFunctional/serial/StartWithProxy (85.99s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (156.58s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1018 08:39:37.487152    9956 config.go:182] Loaded profile config "functional-679071": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-679071 --alsologtostderr -v=8
E1018 08:40:34.449956    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-679071 --alsologtostderr -v=8: (2m36.57558323s)
functional_test.go:678: soft start took 2m36.576287502s for "functional-679071" cluster.
I1018 08:42:14.063120    9956 config.go:182] Loaded profile config "functional-679071": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (156.58s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-679071 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-679071 cache add registry.k8s.io/pause:3.1: (1.109739796s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-679071 cache add registry.k8s.io/pause:3.3: (1.135274324s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-679071 cache add registry.k8s.io/pause:latest: (1.109931766s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.98s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-679071 /tmp/TestFunctionalserialCacheCmdcacheadd_local2862204165/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 cache add minikube-local-cache-test:functional-679071
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-679071 cache add minikube-local-cache-test:functional-679071: (1.59947267s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 cache delete minikube-local-cache-test:functional-679071
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-679071
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.98s)
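
add_local builds a throwaway image with docker on the host and loads it into minikube's cache by tag. The same cycle by hand; the tag and build context below are placeholders:

    docker build -t local-cache-demo:dev ./some-build-context
    minikube -p my-profile cache add local-cache-demo:dev
    minikube cache list
    minikube -p my-profile cache delete local-cache-demo:dev
    docker rmi local-cache-demo:dev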

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-679071 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.39014ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-679071 cache reload: (1.039449344s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)
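
cache_reload removes a cached image inside the node with crictl, confirms `crictl inspecti` now fails, and then uses `minikube cache reload` to push the cached copy back in. The same sequence by hand, with `my-profile` as a placeholder:

    minikube -p my-profile ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p my-profile ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
    minikube -p my-profile cache reload
    minikube -p my-profile ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again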

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 kubectl -- --context functional-679071 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-679071 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (372.45s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-679071 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1018 08:42:50.592323    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:43:18.298826    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:47:50.592134    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-679071 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (6m12.450399488s)
functional_test.go:776: restart took 6m12.450561567s for "functional-679071" cluster.
I1018 08:48:34.398772    9956 config.go:182] Loaded profile config "functional-679071": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (372.45s)
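
ExtraConfig restarts the cluster with an apiserver admission-plugin override via --extra-config. One way to confirm afterwards that the flag reached the kube-apiserver static pod, assuming the usual `component=kube-apiserver` label that kubeadm puts on its static pods:

    minikube start -p my-profile \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context my-profile -n kube-system get pod -l component=kube-apiserver -o yaml \
      | grep enable-admission-plugins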

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-679071 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-679071 logs: (1.478546951s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 logs --file /tmp/TestFunctionalserialLogsFileCmd858954436/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-679071 logs --file /tmp/TestFunctionalserialLogsFileCmd858954436/001/logs.txt: (1.457955758s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.42s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-679071 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-679071
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-679071: exit status 115 (295.153839ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.157:30404 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-679071 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.42s)
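
InvalidService applies a Service whose selector matches no running pod and expects `minikube service` to fail with SVC_UNREACHABLE (exit status 115), as seen above. A sketch that reproduces the failure with an inline manifest; the service name and selector are placeholders standing in for testdata/invalidsvc.yaml:

    kubectl --context my-profile apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector:
        app: does-not-exist
      ports:
      - port: 80
    EOF
    minikube service invalid-svc -p my-profile          # expected: exit 115, no running pod for the service
    kubectl --context my-profile delete svc invalid-svc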

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-679071 config get cpus: exit status 14 (52.495402ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-679071 config get cpus: exit status 14 (51.800992ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
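
The config subcommand signals "key not set" with exit status 14, which is what the unset/get pairs above assert. The cycle by hand, with `my-profile` as a placeholder:

    minikube -p my-profile config unset cpus
    minikube -p my-profile config get cpus     # exit status 14: key not found in config
    minikube -p my-profile config set cpus 2
    minikube -p my-profile config get cpus     # prints 2, exit 0
    minikube -p my-profile config unset cpus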

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (44.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-679071 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-679071 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 19927: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (44.73s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-679071 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-679071 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (149.989067ms)

                                                
                                                
-- stdout --
	* [functional-679071] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 08:48:54.985408   19766 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:48:54.986147   19766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:48:54.986168   19766 out.go:374] Setting ErrFile to fd 2...
	I1018 08:48:54.986175   19766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:48:54.986501   19766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
	I1018 08:48:54.987173   19766 out.go:368] Setting JSON to false
	I1018 08:48:54.988547   19766 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1885,"bootTime":1760775450,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:48:54.988662   19766 start.go:141] virtualization: kvm guest
	I1018 08:48:54.990768   19766 out.go:179] * [functional-679071] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 08:48:54.992166   19766 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 08:48:54.992214   19766 notify.go:220] Checking for updates...
	I1018 08:48:54.994941   19766 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:48:54.996503   19766 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	I1018 08:48:54.997767   19766 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	I1018 08:48:54.999159   19766 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 08:48:55.000533   19766 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:48:55.002386   19766 config.go:182] Loaded profile config "functional-679071": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:48:55.002897   19766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:48:55.002959   19766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:48:55.018036   19766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1018 08:48:55.018673   19766 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:48:55.019352   19766 main.go:141] libmachine: Using API Version  1
	I1018 08:48:55.019367   19766 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:48:55.019807   19766 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:48:55.020076   19766 main.go:141] libmachine: (functional-679071) Calling .DriverName
	I1018 08:48:55.020398   19766 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:48:55.020861   19766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:48:55.020915   19766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:48:55.035320   19766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46063
	I1018 08:48:55.035745   19766 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:48:55.036230   19766 main.go:141] libmachine: Using API Version  1
	I1018 08:48:55.036256   19766 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:48:55.036796   19766 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:48:55.037038   19766 main.go:141] libmachine: (functional-679071) Calling .DriverName
	I1018 08:48:55.072815   19766 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 08:48:55.074472   19766 start.go:305] selected driver: kvm2
	I1018 08:48:55.074500   19766 start.go:925] validating driver "kvm2" against &{Name:functional-679071 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-679071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.157 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:48:55.074640   19766 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:48:55.077302   19766 out.go:203] 
	W1018 08:48:55.078432   19766 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 08:48:55.079494   19766 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-679071 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.30s)
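
DryRun exercises start-time validation without touching the cluster: a 250MB memory request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) because it is below the 1800MB minimum, while a plain dry run against the existing profile validates cleanly. By hand:

    minikube start -p my-profile --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio   # exit 23
    minikube start -p my-profile --dry-run --driver=kvm2 --container-runtime=crio                  # exit 0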

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-679071 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-679071 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (162.50764ms)

                                                
                                                
-- stdout --
	* [functional-679071] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 08:48:55.289643   19855 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:48:55.289913   19855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:48:55.289940   19855 out.go:374] Setting ErrFile to fd 2...
	I1018 08:48:55.289950   19855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:48:55.290272   19855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
	I1018 08:48:55.290706   19855 out.go:368] Setting JSON to false
	I1018 08:48:55.291609   19855 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1885,"bootTime":1760775450,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:48:55.291731   19855 start.go:141] virtualization: kvm guest
	I1018 08:48:55.293684   19855 out.go:179] * [functional-679071] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1018 08:48:55.295487   19855 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 08:48:55.295506   19855 notify.go:220] Checking for updates...
	I1018 08:48:55.298104   19855 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:48:55.299341   19855 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	I1018 08:48:55.300627   19855 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	I1018 08:48:55.302232   19855 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 08:48:55.304023   19855 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:48:55.306074   19855 config.go:182] Loaded profile config "functional-679071": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:48:55.306654   19855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:48:55.306737   19855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:48:55.328332   19855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40477
	I1018 08:48:55.329117   19855 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:48:55.329728   19855 main.go:141] libmachine: Using API Version  1
	I1018 08:48:55.329758   19855 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:48:55.330420   19855 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:48:55.330626   19855 main.go:141] libmachine: (functional-679071) Calling .DriverName
	I1018 08:48:55.330881   19855 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:48:55.331321   19855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:48:55.331372   19855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:48:55.347626   19855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39445
	I1018 08:48:55.348130   19855 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:48:55.348721   19855 main.go:141] libmachine: Using API Version  1
	I1018 08:48:55.348749   19855 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:48:55.349283   19855 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:48:55.349553   19855 main.go:141] libmachine: (functional-679071) Calling .DriverName
	I1018 08:48:55.389146   19855 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1018 08:48:55.390562   19855 start.go:305] selected driver: kvm2
	I1018 08:48:55.391147   19855 start.go:925] validating driver "kvm2" against &{Name:functional-679071 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-679071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.157 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:48:55.391307   19855 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:48:55.393560   19855 out.go:203] 
	W1018 08:48:55.395163   19855 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1018 08:48:55.396629   19855 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
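
The French output above comes from minikube's built-in translations; the test presumably selects the language through the locale environment, which is not visible in this log. A hypothetical reproduction, assuming LC_ALL is the variable that drives the language selection:

    LC_ALL=fr minikube start -p my-profile --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio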

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-679071 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-679071 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-2tthf" [ceb5340a-075d-4640-88a1-8dd4adcf072e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-2tthf" [ceb5340a-075d-4640-88a1-8dd4adcf072e] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004092252s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.157:32202
functional_test.go:1680: http://192.168.39.157:32202: success! body:
Request served by hello-node-connect-7d85dfc575-2tthf

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.157:32202
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.58s)
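
ServiceCmdConnect deploys an echo server, exposes it as a NodePort, asks minikube for the URL, and fetches it. The same round trip by hand; the deployment name is a placeholder and the image is the one used in the run above:

    kubectl --context my-profile create deployment hello-node --image kicbase/echo-server
    kubectl --context my-profile expose deployment hello-node --type=NodePort --port=8080
    URL=$(minikube -p my-profile service hello-node --url)
    curl -s "$URL"    # the echo server answers with the request it served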

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (48.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [568516af-3537-4d3f-802f-05cf17d64120] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004555184s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-679071 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-679071 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-679071 get pvc myclaim -o=json
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-679071 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-679071 apply -f testdata/storage-provisioner/pod.yaml
I1018 08:48:51.094861    9956 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4a62b0e4-e228-4239-a2dd-902dd80e61f9] Pending
helpers_test.go:352: "sp-pod" [4a62b0e4-e228-4239-a2dd-902dd80e61f9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [4a62b0e4-e228-4239-a2dd-902dd80e61f9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.005410355s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-679071 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-679071 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-679071 delete -f testdata/storage-provisioner/pod.yaml: (1.284924378s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-679071 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9b1daef2-b8e3-4998-8e40-a790a6d120ef] Pending
helpers_test.go:352: "sp-pod" [9b1daef2-b8e3-4998-8e40-a790a6d120ef] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [9b1daef2-b8e3-4998-8e40-a790a6d120ef] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004794224s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-679071 exec sp-pod -- ls /tmp/mount
2025/10/18 08:49:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.46s)
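
The point of this test is that data written through the PVC survives a pod recreation, and the sequence above can be replayed directly with the same manifests (the manifest contents are not shown in this log; the sketch assumes, as the log implies, a claim named myclaim mounted at /tmp/mount in pod sp-pod):
    kubectl --context functional-679071 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-679071 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-679071 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-679071 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-679071 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-679071 exec sp-pod -- ls /tmp/mount   # foo should still be listed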

                                                
                                    
TestFunctional/parallel/SSHCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh -n functional-679071 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 cp functional-679071:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2493003417/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh -n functional-679071 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh -n functional-679071 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.43s)

                                                
                                    
TestFunctional/parallel/MySQL (23.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-679071 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-8w5bn" [9337f2a9-f607-4e9c-934b-fcc4235c7a96] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-8w5bn" [9337f2a9-f607-4e9c-934b-fcc4235c7a96] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.197832699s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-679071 exec mysql-5bb876957f-8w5bn -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-679071 exec mysql-5bb876957f-8w5bn -- mysql -ppassword -e "show databases;": exit status 1 (249.989487ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1018 08:49:09.771725    9956 retry.go:31] will retry after 974.222689ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-679071 exec mysql-5bb876957f-8w5bn -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-679071 exec mysql-5bb876957f-8w5bn -- mysql -ppassword -e "show databases;": exit status 1 (125.428265ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1018 08:49:10.871632    9956 retry.go:31] will retry after 1.054113256s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-679071 exec mysql-5bb876957f-8w5bn -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-679071 exec mysql-5bb876957f-8w5bn -- mysql -ppassword -e "show databases;": exit status 1 (118.726163ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1018 08:49:12.045565    9956 retry.go:31] will retry after 1.379089938s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-679071 exec mysql-5bb876957f-8w5bn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.40s)
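
The two non-zero exits above (ERROR 1045 and ERROR 2002) are typical of querying a MySQL container before mysqld has finished initializing; the test simply retries with increasing backoff until the query succeeds. A hedged equivalent when checking by hand is to poll the same exec command:
    until kubectl --context functional-679071 exec mysql-5bb876957f-8w5bn -- \
          mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
        sleep 2
    done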

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9956/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "sudo cat /etc/test/nested/copy/9956/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
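
This check exercises minikube's file sync: files placed under the host's MINIKUBE_HOME files directory (by default ~/.minikube/files) are copied into the guest at the mirrored absolute path on start. A minimal sketch of seeding the file that the test reads back (assuming the default MINIKUBE_HOME; 9956 appears to be the test process ID used to build the path):
    mkdir -p ~/.minikube/files/etc/test/nested/copy/9956
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/9956/hosts
    # after the profile starts, the test reads it back with:
    #   minikube -p functional-679071 ssh "sudo cat /etc/test/nested/copy/9956/hosts"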

                                                
                                    
TestFunctional/parallel/CertSync (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9956.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "sudo cat /etc/ssl/certs/9956.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9956.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "sudo cat /usr/share/ca-certificates/9956.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/99562.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "sudo cat /etc/ssl/certs/99562.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/99562.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "sudo cat /usr/share/ca-certificates/99562.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)
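
The .0 filenames checked here (51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash convention: a CA certificate is looked up by its subject hash, so each synced .pem also gets a hash-named entry under /etc/ssl/certs. To see which hash a given certificate maps to (a sketch, assuming openssl is installed on the host):
    openssl x509 -noout -subject_hash -in 9956.pem
    # should print the hash used for the matching /etc/ssl/certs/<hash>.0 entry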

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-679071 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
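
The go-template above only prints the label keys of the first node; when checking interactively, kubectl's built-in flag gives the same information with less quoting (a convenience, not what the test itself runs):
    kubectl --context functional-679071 get nodes --show-labels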

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-679071 ssh "sudo systemctl is-active docker": exit status 1 (226.460027ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-679071 ssh "sudo systemctl is-active containerd": exit status 1 (231.943107ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
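
Both exit status 1 results above are expected rather than failures: with the crio runtime active, systemctl is-active prints "inactive" and exits 3 for docker and containerd, and minikube ssh surfaces that remote status as a non-zero exit. The complementary check, which should succeed on this profile, is:
    out/minikube-linux-amd64 -p functional-679071 ssh "sudo systemctl is-active crio"   # expected: active, exit 0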

                                                
                                    
TestFunctional/parallel/License (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.39s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
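
All three subtests run the same command: minikube update-context rewrites the cluster entry for this profile in the active kubeconfig (useful when the VM's IP address has changed). A quick way to confirm the result by hand (a sketch; the jsonpath just prints the API server URL recorded for the profile):
    out/minikube-linux-amd64 -p functional-679071 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-679071")].cluster.server}'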

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-679071 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-679071
localhost/kicbase/echo-server:functional-679071
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-679071 image ls --format short --alsologtostderr:
I1018 08:49:14.378865   20177 out.go:360] Setting OutFile to fd 1 ...
I1018 08:49:14.379241   20177 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:49:14.379255   20177 out.go:374] Setting ErrFile to fd 2...
I1018 08:49:14.379261   20177 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:49:14.379589   20177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
I1018 08:49:14.380325   20177 config.go:182] Loaded profile config "functional-679071": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:49:14.380421   20177 config.go:182] Loaded profile config "functional-679071": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:49:14.380765   20177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 08:49:14.380844   20177 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 08:49:14.395454   20177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39323
I1018 08:49:14.395964   20177 main.go:141] libmachine: () Calling .GetVersion
I1018 08:49:14.396574   20177 main.go:141] libmachine: Using API Version  1
I1018 08:49:14.396607   20177 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 08:49:14.397034   20177 main.go:141] libmachine: () Calling .GetMachineName
I1018 08:49:14.397296   20177 main.go:141] libmachine: (functional-679071) Calling .GetState
I1018 08:49:14.399450   20177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 08:49:14.399498   20177 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 08:49:14.413511   20177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39921
I1018 08:49:14.414001   20177 main.go:141] libmachine: () Calling .GetVersion
I1018 08:49:14.414482   20177 main.go:141] libmachine: Using API Version  1
I1018 08:49:14.414502   20177 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 08:49:14.414850   20177 main.go:141] libmachine: () Calling .GetMachineName
I1018 08:49:14.415118   20177 main.go:141] libmachine: (functional-679071) Calling .DriverName
I1018 08:49:14.415342   20177 ssh_runner.go:195] Run: systemctl --version
I1018 08:49:14.415377   20177 main.go:141] libmachine: (functional-679071) Calling .GetSSHHostname
I1018 08:49:14.418411   20177 main.go:141] libmachine: (functional-679071) DBG | domain functional-679071 has defined MAC address 52:54:00:32:cf:ae in network mk-functional-679071
I1018 08:49:14.419000   20177 main.go:141] libmachine: (functional-679071) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:cf:ae", ip: ""} in network mk-functional-679071: {Iface:virbr1 ExpiryTime:2025-10-18 09:38:26 +0000 UTC Type:0 Mac:52:54:00:32:cf:ae Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:functional-679071 Clientid:01:52:54:00:32:cf:ae}
I1018 08:49:14.419040   20177 main.go:141] libmachine: (functional-679071) DBG | domain functional-679071 has defined IP address 192.168.39.157 and MAC address 52:54:00:32:cf:ae in network mk-functional-679071
I1018 08:49:14.419221   20177 main.go:141] libmachine: (functional-679071) Calling .GetSSHPort
I1018 08:49:14.419408   20177 main.go:141] libmachine: (functional-679071) Calling .GetSSHKeyPath
I1018 08:49:14.419592   20177 main.go:141] libmachine: (functional-679071) Calling .GetSSHUsername
I1018 08:49:14.419751   20177 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/functional-679071/id_rsa Username:docker}
I1018 08:49:14.510472   20177 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 08:49:14.576515   20177 main.go:141] libmachine: Making call to close driver server
I1018 08:49:14.576531   20177 main.go:141] libmachine: (functional-679071) Calling .Close
I1018 08:49:14.576798   20177 main.go:141] libmachine: Successfully made call to close driver server
I1018 08:49:14.576815   20177 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 08:49:14.576824   20177 main.go:141] libmachine: Making call to close driver server
I1018 08:49:14.576831   20177 main.go:141] libmachine: (functional-679071) Calling .Close
I1018 08:49:14.577101   20177 main.go:141] libmachine: Successfully made call to close driver server
I1018 08:49:14.577116   20177 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 08:49:14.577132   20177 main.go:141] libmachine: (functional-679071) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
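
As the stderr trace shows, image ls is answered by running sudo crictl images --output json inside the VM over SSH and reformatting the result. When debugging image problems, the same data can be read directly from the node (a sketch using the command visible above):
    out/minikube-linux-amd64 -p functional-679071 ssh "sudo crictl images"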

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-679071 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ localhost/minikube-local-cache-test     │ functional-679071  │ ebdf8c21b2e84 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-679071  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-679071  │ 692232162aa27 │ 1.47MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-679071 image ls --format table --alsologtostderr:
I1018 08:49:22.331737   20805 out.go:360] Setting OutFile to fd 1 ...
I1018 08:49:22.332018   20805 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:49:22.332028   20805 out.go:374] Setting ErrFile to fd 2...
I1018 08:49:22.332035   20805 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:49:22.332235   20805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
I1018 08:49:22.332853   20805 config.go:182] Loaded profile config "functional-679071": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:49:22.332956   20805 config.go:182] Loaded profile config "functional-679071": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:49:22.333337   20805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 08:49:22.333394   20805 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 08:49:22.349385   20805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46575
I1018 08:49:22.350081   20805 main.go:141] libmachine: () Calling .GetVersion
I1018 08:49:22.350681   20805 main.go:141] libmachine: Using API Version  1
I1018 08:49:22.350711   20805 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 08:49:22.351130   20805 main.go:141] libmachine: () Calling .GetMachineName
I1018 08:49:22.351365   20805 main.go:141] libmachine: (functional-679071) Calling .GetState
I1018 08:49:22.353503   20805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 08:49:22.353555   20805 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 08:49:22.368065   20805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39103
I1018 08:49:22.368639   20805 main.go:141] libmachine: () Calling .GetVersion
I1018 08:49:22.369260   20805 main.go:141] libmachine: Using API Version  1
I1018 08:49:22.369296   20805 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 08:49:22.369643   20805 main.go:141] libmachine: () Calling .GetMachineName
I1018 08:49:22.369856   20805 main.go:141] libmachine: (functional-679071) Calling .DriverName
I1018 08:49:22.370102   20805 ssh_runner.go:195] Run: systemctl --version
I1018 08:49:22.370132   20805 main.go:141] libmachine: (functional-679071) Calling .GetSSHHostname
I1018 08:49:22.374485   20805 main.go:141] libmachine: (functional-679071) DBG | domain functional-679071 has defined MAC address 52:54:00:32:cf:ae in network mk-functional-679071
I1018 08:49:22.375019   20805 main.go:141] libmachine: (functional-679071) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:cf:ae", ip: ""} in network mk-functional-679071: {Iface:virbr1 ExpiryTime:2025-10-18 09:38:26 +0000 UTC Type:0 Mac:52:54:00:32:cf:ae Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:functional-679071 Clientid:01:52:54:00:32:cf:ae}
I1018 08:49:22.375049   20805 main.go:141] libmachine: (functional-679071) DBG | domain functional-679071 has defined IP address 192.168.39.157 and MAC address 52:54:00:32:cf:ae in network mk-functional-679071
I1018 08:49:22.375282   20805 main.go:141] libmachine: (functional-679071) Calling .GetSSHPort
I1018 08:49:22.375516   20805 main.go:141] libmachine: (functional-679071) Calling .GetSSHKeyPath
I1018 08:49:22.375696   20805 main.go:141] libmachine: (functional-679071) Calling .GetSSHUsername
I1018 08:49:22.375879   20805 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/functional-679071/id_rsa Username:docker}
I1018 08:49:22.483809   20805 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 08:49:22.554307   20805 main.go:141] libmachine: Making call to close driver server
I1018 08:49:22.554325   20805 main.go:141] libmachine: (functional-679071) Calling .Close
I1018 08:49:22.554712   20805 main.go:141] libmachine: (functional-679071) DBG | Closing plugin on server side
I1018 08:49:22.554815   20805 main.go:141] libmachine: Successfully made call to close driver server
I1018 08:49:22.554834   20805 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 08:49:22.554851   20805 main.go:141] libmachine: Making call to close driver server
I1018 08:49:22.554863   20805 main.go:141] libmachine: (functional-679071) Calling .Close
I1018 08:49:22.555141   20805 main.go:141] libmachine: Successfully made call to close driver server
I1018 08:49:22.555165   20805 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 08:49:22.555189   20805 main.go:141] libmachine: (functional-679071) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-679071 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c6
6c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry
.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f2
4d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"692232162aa27f6ff959af4eaef4a29d82edbd6c5df306fcfe798503bd13f9da","repoDigests":["localhost/my-image@sha256:0196be4c579c2ea1ec4964cc4d0f3edbf86584d3e9f2149e4a9e3c5a505bff90"],"repoTags":["localhost/my-image:functional-679071"],"size":"1468600"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b89
9ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-679071"],"size":"4943877"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92
e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"970365864eefb5a53172542a148636cfa6f9e32f0ee5e203edb35ff0cff463b6","repoDigests":["docker.io/library/d31192d8f3649621b655a608743b1588b743daefe3ee29549dec38e0aa094ab7-tmp@sha256:042add30320dc981b9f1b797575fe3c9fa65d9475d92de23c5916c7ffe1e237d"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"ebdf8c21b2e84dfdb8e5e68f1ed06947241d8d5953082b00047d22086e4cad34","repoDigests":["localhost/minikube-local-cache-test@sha256:8517ba3cdf2f7f28ba764334e28e250dfdceeb44a6dd15f997c03866ee844c0a"],"repoTags":["localhost/minikube-local-cache-test:functional-67
9071"],"size":"3330"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-679071 image ls --format json --alsologtostderr:
I1018 08:49:21.607067   20703 out.go:360] Setting OutFile to fd 1 ...
I1018 08:49:21.607353   20703 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:49:21.607364   20703 out.go:374] Setting ErrFile to fd 2...
I1018 08:49:21.607371   20703 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:49:21.607597   20703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
I1018 08:49:21.608290   20703 config.go:182] Loaded profile config "functional-679071": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:49:21.608411   20703 config.go:182] Loaded profile config "functional-679071": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:49:21.609543   20703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 08:49:21.609708   20703 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 08:49:21.629821   20703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36867
I1018 08:49:21.630531   20703 main.go:141] libmachine: () Calling .GetVersion
I1018 08:49:21.631102   20703 main.go:141] libmachine: Using API Version  1
I1018 08:49:21.631122   20703 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 08:49:21.631441   20703 main.go:141] libmachine: () Calling .GetMachineName
I1018 08:49:21.631651   20703 main.go:141] libmachine: (functional-679071) Calling .GetState
I1018 08:49:21.634125   20703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 08:49:21.634178   20703 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 08:49:21.648522   20703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
I1018 08:49:21.649066   20703 main.go:141] libmachine: () Calling .GetVersion
I1018 08:49:21.649711   20703 main.go:141] libmachine: Using API Version  1
I1018 08:49:21.649737   20703 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 08:49:21.650248   20703 main.go:141] libmachine: () Calling .GetMachineName
I1018 08:49:21.650456   20703 main.go:141] libmachine: (functional-679071) Calling .DriverName
I1018 08:49:21.650766   20703 ssh_runner.go:195] Run: systemctl --version
I1018 08:49:21.650796   20703 main.go:141] libmachine: (functional-679071) Calling .GetSSHHostname
I1018 08:49:21.654751   20703 main.go:141] libmachine: (functional-679071) DBG | domain functional-679071 has defined MAC address 52:54:00:32:cf:ae in network mk-functional-679071
I1018 08:49:21.655304   20703 main.go:141] libmachine: (functional-679071) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:cf:ae", ip: ""} in network mk-functional-679071: {Iface:virbr1 ExpiryTime:2025-10-18 09:38:26 +0000 UTC Type:0 Mac:52:54:00:32:cf:ae Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:functional-679071 Clientid:01:52:54:00:32:cf:ae}
I1018 08:49:21.655343   20703 main.go:141] libmachine: (functional-679071) DBG | domain functional-679071 has defined IP address 192.168.39.157 and MAC address 52:54:00:32:cf:ae in network mk-functional-679071
I1018 08:49:21.655507   20703 main.go:141] libmachine: (functional-679071) Calling .GetSSHPort
I1018 08:49:21.655716   20703 main.go:141] libmachine: (functional-679071) Calling .GetSSHKeyPath
I1018 08:49:21.655882   20703 main.go:141] libmachine: (functional-679071) Calling .GetSSHUsername
I1018 08:49:21.656075   20703 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/functional-679071/id_rsa Username:docker}
I1018 08:49:21.761255   20703 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 08:49:22.275154   20703 main.go:141] libmachine: Making call to close driver server
I1018 08:49:22.275165   20703 main.go:141] libmachine: (functional-679071) Calling .Close
I1018 08:49:22.275398   20703 main.go:141] libmachine: Successfully made call to close driver server
I1018 08:49:22.275421   20703 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 08:49:22.275431   20703 main.go:141] libmachine: Making call to close driver server
I1018 08:49:22.275441   20703 main.go:141] libmachine: (functional-679071) Calling .Close
I1018 08:49:22.275445   20703 main.go:141] libmachine: (functional-679071) DBG | Closing plugin on server side
I1018 08:49:22.275748   20703 main.go:141] libmachine: (functional-679071) DBG | Closing plugin on server side
I1018 08:49:22.275816   20703 main.go:141] libmachine: Successfully made call to close driver server
I1018 08:49:22.275839   20703 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-679071 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ebdf8c21b2e84dfdb8e5e68f1ed06947241d8d5953082b00047d22086e4cad34
repoDigests:
- localhost/minikube-local-cache-test@sha256:8517ba3cdf2f7f28ba764334e28e250dfdceeb44a6dd15f997c03866ee844c0a
repoTags:
- localhost/minikube-local-cache-test:functional-679071
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-679071
size: "4943877"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-679071 image ls --format yaml --alsologtostderr:
I1018 08:49:14.629104   20201 out.go:360] Setting OutFile to fd 1 ...
I1018 08:49:14.629355   20201 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:49:14.629363   20201 out.go:374] Setting ErrFile to fd 2...
I1018 08:49:14.629367   20201 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:49:14.629571   20201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
I1018 08:49:14.630184   20201 config.go:182] Loaded profile config "functional-679071": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:49:14.630278   20201 config.go:182] Loaded profile config "functional-679071": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:49:14.630627   20201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 08:49:14.630682   20201 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 08:49:14.647260   20201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43915
I1018 08:49:14.647869   20201 main.go:141] libmachine: () Calling .GetVersion
I1018 08:49:14.648655   20201 main.go:141] libmachine: Using API Version  1
I1018 08:49:14.648686   20201 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 08:49:14.649161   20201 main.go:141] libmachine: () Calling .GetMachineName
I1018 08:49:14.649388   20201 main.go:141] libmachine: (functional-679071) Calling .GetState
I1018 08:49:14.651776   20201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 08:49:14.651827   20201 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 08:49:14.668139   20201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34907
I1018 08:49:14.668702   20201 main.go:141] libmachine: () Calling .GetVersion
I1018 08:49:14.669315   20201 main.go:141] libmachine: Using API Version  1
I1018 08:49:14.669370   20201 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 08:49:14.669787   20201 main.go:141] libmachine: () Calling .GetMachineName
I1018 08:49:14.670014   20201 main.go:141] libmachine: (functional-679071) Calling .DriverName
I1018 08:49:14.670294   20201 ssh_runner.go:195] Run: systemctl --version
I1018 08:49:14.670332   20201 main.go:141] libmachine: (functional-679071) Calling .GetSSHHostname
I1018 08:49:14.674741   20201 main.go:141] libmachine: (functional-679071) DBG | domain functional-679071 has defined MAC address 52:54:00:32:cf:ae in network mk-functional-679071
I1018 08:49:14.675223   20201 main.go:141] libmachine: (functional-679071) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:cf:ae", ip: ""} in network mk-functional-679071: {Iface:virbr1 ExpiryTime:2025-10-18 09:38:26 +0000 UTC Type:0 Mac:52:54:00:32:cf:ae Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:functional-679071 Clientid:01:52:54:00:32:cf:ae}
I1018 08:49:14.675268   20201 main.go:141] libmachine: (functional-679071) DBG | domain functional-679071 has defined IP address 192.168.39.157 and MAC address 52:54:00:32:cf:ae in network mk-functional-679071
I1018 08:49:14.675538   20201 main.go:141] libmachine: (functional-679071) Calling .GetSSHPort
I1018 08:49:14.675833   20201 main.go:141] libmachine: (functional-679071) Calling .GetSSHKeyPath
I1018 08:49:14.676060   20201 main.go:141] libmachine: (functional-679071) Calling .GetSSHUsername
I1018 08:49:14.676266   20201 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/functional-679071/id_rsa Username:docker}
I1018 08:49:14.779194   20201 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 08:49:14.834171   20201 main.go:141] libmachine: Making call to close driver server
I1018 08:49:14.834183   20201 main.go:141] libmachine: (functional-679071) Calling .Close
I1018 08:49:14.834480   20201 main.go:141] libmachine: Successfully made call to close driver server
I1018 08:49:14.834499   20201 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 08:49:14.834509   20201 main.go:141] libmachine: Making call to close driver server
I1018 08:49:14.834517   20201 main.go:141] libmachine: (functional-679071) Calling .Close
I1018 08:49:14.834787   20201 main.go:141] libmachine: (functional-679071) DBG | Closing plugin on server side
I1018 08:49:14.834820   20201 main.go:141] libmachine: Successfully made call to close driver server
I1018 08:49:14.834827   20201 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.475996768s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-679071
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-679071 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-679071 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-hqm79" [948a9dfa-8a0e-4525-8869-4ed72e5525c2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-hqm79" [948a9dfa-8a0e-4525-8869-4ed72e5525c2] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.005249874s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image load --daemon kicbase/echo-server:functional-679071 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-679071 image load --daemon kicbase/echo-server:functional-679071 --alsologtostderr: (1.303082561s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image load --daemon kicbase/echo-server:functional-679071 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-679071
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image load --daemon kicbase/echo-server:functional-679071 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.58s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image save kicbase/echo-server:functional-679071 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image rm kicbase/echo-server:functional-679071 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image ls
I1018 08:48:48.724021    9956 retry.go:31] will retry after 2.167870306s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:c4d12253-9cd9-41c8-bc01-21ad62b273ec ResourceVersion:499 Generation:0 CreationTimestamp:2025-10-18 08:48:48 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a96ac0 VolumeMode:0xc001a96ad0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-679071
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 image save --daemon kicbase/echo-server:functional-679071 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-679071
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
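Editor's note: the ImageSaveToFile/ImageRemove/ImageLoadFromFile/ImageSaveDaemon subtests above exercise a save/load round trip between the cluster's crio runtime and the host. A minimal sketch of the same round trip, assuming the functional-679071 profile from this run and a hypothetical tarball path (/tmp/echo-server-save.tar instead of the Jenkins workspace path used by the test):

	# save the image from the cluster runtime to a tarball on the host
	out/minikube-linux-amd64 -p functional-679071 image save kicbase/echo-server:functional-679071 /tmp/echo-server-save.tar
	# remove it from the cluster, then load the tarball back in
	out/minikube-linux-amd64 -p functional-679071 image rm kicbase/echo-server:functional-679071
	out/minikube-linux-amd64 -p functional-679071 image load /tmp/echo-server-save.tar
	# or push the cluster image straight into the local Docker daemon
	out/minikube-linux-amd64 -p functional-679071 image save --daemon kicbase/echo-server:functional-679071
	out/minikube-linux-amd64 -p functional-679071 image ls

After the --daemon save, the test inspects localhost/kicbase/echo-server:functional-679071 (see ImageSaveDaemon above), which suggests images coming back from crio carry a localhost/ prefix on the host.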

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 service list -o json
functional_test.go:1504: Took "471.849054ms" to run "out/minikube-linux-amd64 -p functional-679071 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.157:32228
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.157:32228
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
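Editor's note: the ServiceCmd subtests above resolve the NodePort endpoint of the hello-node deployment in several output formats. A minimal sketch, assuming the same functional-679071 profile and service name as this run:

	# list services, then resolve the endpoint as HTTPS and plain URL
	out/minikube-linux-amd64 -p functional-679071 service list -o json
	out/minikube-linux-amd64 -p functional-679071 service --namespace=default --https --url hello-node
	out/minikube-linux-amd64 -p functional-679071 service hello-node --url

Both URL forms point at the node IP and the same NodePort (192.168.39.157:32228 in this run).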

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "475.067478ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "57.260359ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (24.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-679071 /tmp/TestFunctionalparallelMountCmdany-port835397719/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760777333994801714" to /tmp/TestFunctionalparallelMountCmdany-port835397719/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760777333994801714" to /tmp/TestFunctionalparallelMountCmdany-port835397719/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760777333994801714" to /tmp/TestFunctionalparallelMountCmdany-port835397719/001/test-1760777333994801714
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-679071 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (239.890959ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 08:48:54.235124    9956 retry.go:31] will retry after 323.350673ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 18 08:48 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 18 08:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 18 08:48 test-1760777333994801714
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh cat /mount-9p/test-1760777333994801714
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-679071 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [9a16cd0c-faf3-4486-93ca-27486ff62e5a] Pending
helpers_test.go:352: "busybox-mount" [9a16cd0c-faf3-4486-93ca-27486ff62e5a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [9a16cd0c-faf3-4486-93ca-27486ff62e5a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [9a16cd0c-faf3-4486-93ca-27486ff62e5a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 22.005753868s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-679071 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-679071 /tmp/TestFunctionalparallelMountCmdany-port835397719/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (24.70s)
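Editor's note: the mount test drives a 9p mount from the host into the guest and verifies it from both sides; the first findmnt probe is retried because the mount daemon needs a moment to come up. A minimal manual reproduction, assuming an existing profile and a hypothetical host directory /tmp/mnt-src:

	# run the mount in the background (the test keeps it running as a daemon)
	out/minikube-linux-amd64 mount -p functional-679071 /tmp/mnt-src:/mount-9p --alsologtostderr -v=1 &
	# verify the 9p filesystem is visible inside the guest, then inspect its contents
	out/minikube-linux-amd64 -p functional-679071 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-679071 ssh -- ls -la /mount-9p
	# clean up: unmount in the guest and kill any remaining mount processes
	out/minikube-linux-amd64 -p functional-679071 ssh "sudo umount -f /mount-9p"
	out/minikube-linux-amd64 mount -p functional-679071 --kill=true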

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "401.831432ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "62.699123ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-679071 /tmp/TestFunctionalparallelMountCmdspecific-port1617567408/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "findmnt -T /mount-9p | grep 9p"
I1018 08:49:18.777150    9956 detect.go:223] nested VM detected
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-679071 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (287.884533ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 08:49:18.984625    9956 retry.go:31] will retry after 564.447354ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-679071 /tmp/TestFunctionalparallelMountCmdspecific-port1617567408/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-679071 ssh "sudo umount -f /mount-9p": exit status 1 (247.17806ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-679071 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-679071 /tmp/TestFunctionalparallelMountCmdspecific-port1617567408/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.94s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-679071 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2988170164/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-679071 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2988170164/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-679071 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2988170164/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-679071 ssh "findmnt -T" /mount1: exit status 1 (287.974874ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 08:49:20.923479    9956 retry.go:31] will retry after 601.339231ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-679071 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-679071 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-679071 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2988170164/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-679071 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2988170164/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-679071 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2988170164/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-679071
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-679071
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-679071
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (235.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 08:52:50.583614    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-083979 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m55.025649781s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (235.76s)
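Editor's note: StartCluster brings up a multi-control-plane cluster behind a shared apiserver endpoint (192.168.39.254:8443 in this run's status logs); the later subtests add a worker node and stop a secondary control plane. A minimal sketch of the same flow, assuming the kvm2 driver and crio runtime used here:

	# start an HA cluster and check node/apiserver status
	out/minikube-linux-amd64 -p ha-083979 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-083979 status --alsologtostderr -v 5
	# as exercised by the later subtests
	out/minikube-linux-amd64 -p ha-083979 node add            # adds a worker (m04)
	out/minikube-linux-amd64 -p ha-083979 node stop m02       # stops a secondary control plane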

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-083979 kubectl -- rollout status deployment/busybox: (5.272181668s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- exec busybox-7b57f96db7-r88xp -- nslookup kubernetes.io
E1018 08:53:42.874908    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:53:42.881408    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:53:42.892898    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:53:42.914423    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:53:42.955885    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- exec busybox-7b57f96db7-rr9rx -- nslookup kubernetes.io
E1018 08:53:43.037724    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:53:43.199383    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- exec busybox-7b57f96db7-skk2w -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- exec busybox-7b57f96db7-r88xp -- nslookup kubernetes.default
E1018 08:53:43.521291    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- exec busybox-7b57f96db7-rr9rx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- exec busybox-7b57f96db7-skk2w -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- exec busybox-7b57f96db7-r88xp -- nslookup kubernetes.default.svc.cluster.local
E1018 08:53:44.163667    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- exec busybox-7b57f96db7-rr9rx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- exec busybox-7b57f96db7-skk2w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.63s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- exec busybox-7b57f96db7-r88xp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- exec busybox-7b57f96db7-r88xp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- exec busybox-7b57f96db7-rr9rx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- exec busybox-7b57f96db7-rr9rx -- sh -c "ping -c 1 192.168.39.1"
E1018 08:53:45.445980    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- exec busybox-7b57f96db7-skk2w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 kubectl -- exec busybox-7b57f96db7-skk2w -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (44.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 node add --alsologtostderr -v 5
E1018 08:53:48.007635    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:53:53.129956    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:03.372109    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:13.661148    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:23.854436    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-083979 node add --alsologtostderr -v 5: (43.653677592s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-083979 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp testdata/cp-test.txt ha-083979:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4099285010/001/cp-test_ha-083979.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979:/home/docker/cp-test.txt ha-083979-m02:/home/docker/cp-test_ha-083979_ha-083979-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m02 "sudo cat /home/docker/cp-test_ha-083979_ha-083979-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979:/home/docker/cp-test.txt ha-083979-m03:/home/docker/cp-test_ha-083979_ha-083979-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m03 "sudo cat /home/docker/cp-test_ha-083979_ha-083979-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979:/home/docker/cp-test.txt ha-083979-m04:/home/docker/cp-test_ha-083979_ha-083979-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m04 "sudo cat /home/docker/cp-test_ha-083979_ha-083979-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp testdata/cp-test.txt ha-083979-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4099285010/001/cp-test_ha-083979-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979-m02:/home/docker/cp-test.txt ha-083979:/home/docker/cp-test_ha-083979-m02_ha-083979.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979 "sudo cat /home/docker/cp-test_ha-083979-m02_ha-083979.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979-m02:/home/docker/cp-test.txt ha-083979-m03:/home/docker/cp-test_ha-083979-m02_ha-083979-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m03 "sudo cat /home/docker/cp-test_ha-083979-m02_ha-083979-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979-m02:/home/docker/cp-test.txt ha-083979-m04:/home/docker/cp-test_ha-083979-m02_ha-083979-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m04 "sudo cat /home/docker/cp-test_ha-083979-m02_ha-083979-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp testdata/cp-test.txt ha-083979-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4099285010/001/cp-test_ha-083979-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979-m03:/home/docker/cp-test.txt ha-083979:/home/docker/cp-test_ha-083979-m03_ha-083979.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979 "sudo cat /home/docker/cp-test_ha-083979-m03_ha-083979.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979-m03:/home/docker/cp-test.txt ha-083979-m02:/home/docker/cp-test_ha-083979-m03_ha-083979-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m02 "sudo cat /home/docker/cp-test_ha-083979-m03_ha-083979-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979-m03:/home/docker/cp-test.txt ha-083979-m04:/home/docker/cp-test_ha-083979-m03_ha-083979-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m04 "sudo cat /home/docker/cp-test_ha-083979-m03_ha-083979-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp testdata/cp-test.txt ha-083979-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4099285010/001/cp-test_ha-083979-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979-m04:/home/docker/cp-test.txt ha-083979:/home/docker/cp-test_ha-083979-m04_ha-083979.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979 "sudo cat /home/docker/cp-test_ha-083979-m04_ha-083979.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979-m04:/home/docker/cp-test.txt ha-083979-m02:/home/docker/cp-test_ha-083979-m04_ha-083979-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m02 "sudo cat /home/docker/cp-test_ha-083979-m04_ha-083979-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 cp ha-083979-m04:/home/docker/cp-test.txt ha-083979-m03:/home/docker/cp-test_ha-083979-m04_ha-083979-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 ssh -n ha-083979-m03 "sudo cat /home/docker/cp-test_ha-083979-m04_ha-083979-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (82.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 node stop m02 --alsologtostderr -v 5
E1018 08:55:04.816952    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-083979 node stop m02 --alsologtostderr -v 5: (1m22.032250368s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-083979 status --alsologtostderr -v 5: exit status 7 (693.964661ms)

                                                
                                                
-- stdout --
	ha-083979
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-083979-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-083979-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-083979-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 08:56:07.046192   25556 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:56:07.046370   25556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:56:07.046384   25556 out.go:374] Setting ErrFile to fd 2...
	I1018 08:56:07.046392   25556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:56:07.046630   25556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
	I1018 08:56:07.046806   25556 out.go:368] Setting JSON to false
	I1018 08:56:07.046830   25556 mustload.go:65] Loading cluster: ha-083979
	I1018 08:56:07.047171   25556 notify.go:220] Checking for updates...
	I1018 08:56:07.047209   25556 config.go:182] Loaded profile config "ha-083979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:56:07.047220   25556 status.go:174] checking status of ha-083979 ...
	I1018 08:56:07.047648   25556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:07.047693   25556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:07.065222   25556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38769
	I1018 08:56:07.065772   25556 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:07.066525   25556 main.go:141] libmachine: Using API Version  1
	I1018 08:56:07.066546   25556 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:07.067217   25556 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:07.067430   25556 main.go:141] libmachine: (ha-083979) Calling .GetState
	I1018 08:56:07.070056   25556 status.go:371] ha-083979 host status = "Running" (err=<nil>)
	I1018 08:56:07.070081   25556 host.go:66] Checking if "ha-083979" exists ...
	I1018 08:56:07.070395   25556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:07.070439   25556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:07.084436   25556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33897
	I1018 08:56:07.085072   25556 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:07.085765   25556 main.go:141] libmachine: Using API Version  1
	I1018 08:56:07.085826   25556 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:07.086239   25556 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:07.086424   25556 main.go:141] libmachine: (ha-083979) Calling .GetIP
	I1018 08:56:07.090344   25556 main.go:141] libmachine: (ha-083979) DBG | domain ha-083979 has defined MAC address 52:54:00:a6:da:9a in network mk-ha-083979
	I1018 08:56:07.091000   25556 main.go:141] libmachine: (ha-083979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:da:9a", ip: ""} in network mk-ha-083979: {Iface:virbr1 ExpiryTime:2025-10-18 09:49:56 +0000 UTC Type:0 Mac:52:54:00:a6:da:9a Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-083979 Clientid:01:52:54:00:a6:da:9a}
	I1018 08:56:07.091047   25556 main.go:141] libmachine: (ha-083979) DBG | domain ha-083979 has defined IP address 192.168.39.250 and MAC address 52:54:00:a6:da:9a in network mk-ha-083979
	I1018 08:56:07.091195   25556 host.go:66] Checking if "ha-083979" exists ...
	I1018 08:56:07.091565   25556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:07.091614   25556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:07.105184   25556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I1018 08:56:07.105648   25556 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:07.106215   25556 main.go:141] libmachine: Using API Version  1
	I1018 08:56:07.106299   25556 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:07.106765   25556 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:07.107049   25556 main.go:141] libmachine: (ha-083979) Calling .DriverName
	I1018 08:56:07.107282   25556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:56:07.107310   25556 main.go:141] libmachine: (ha-083979) Calling .GetSSHHostname
	I1018 08:56:07.111292   25556 main.go:141] libmachine: (ha-083979) DBG | domain ha-083979 has defined MAC address 52:54:00:a6:da:9a in network mk-ha-083979
	I1018 08:56:07.111886   25556 main.go:141] libmachine: (ha-083979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:da:9a", ip: ""} in network mk-ha-083979: {Iface:virbr1 ExpiryTime:2025-10-18 09:49:56 +0000 UTC Type:0 Mac:52:54:00:a6:da:9a Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-083979 Clientid:01:52:54:00:a6:da:9a}
	I1018 08:56:07.111937   25556 main.go:141] libmachine: (ha-083979) DBG | domain ha-083979 has defined IP address 192.168.39.250 and MAC address 52:54:00:a6:da:9a in network mk-ha-083979
	I1018 08:56:07.112163   25556 main.go:141] libmachine: (ha-083979) Calling .GetSSHPort
	I1018 08:56:07.112342   25556 main.go:141] libmachine: (ha-083979) Calling .GetSSHKeyPath
	I1018 08:56:07.112533   25556 main.go:141] libmachine: (ha-083979) Calling .GetSSHUsername
	I1018 08:56:07.112667   25556 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/ha-083979/id_rsa Username:docker}
	I1018 08:56:07.201801   25556 ssh_runner.go:195] Run: systemctl --version
	I1018 08:56:07.209659   25556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:56:07.229707   25556 kubeconfig.go:125] found "ha-083979" server: "https://192.168.39.254:8443"
	I1018 08:56:07.229742   25556 api_server.go:166] Checking apiserver status ...
	I1018 08:56:07.229777   25556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:56:07.251913   25556 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1377/cgroup
	W1018 08:56:07.265021   25556 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1377/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 08:56:07.265081   25556 ssh_runner.go:195] Run: ls
	I1018 08:56:07.271394   25556 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1018 08:56:07.277613   25556 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1018 08:56:07.277647   25556 status.go:463] ha-083979 apiserver status = Running (err=<nil>)
	I1018 08:56:07.277658   25556 status.go:176] ha-083979 status: &{Name:ha-083979 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 08:56:07.277676   25556 status.go:174] checking status of ha-083979-m02 ...
	I1018 08:56:07.278073   25556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:07.278116   25556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:07.293045   25556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40423
	I1018 08:56:07.293587   25556 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:07.294104   25556 main.go:141] libmachine: Using API Version  1
	I1018 08:56:07.294124   25556 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:07.294480   25556 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:07.294667   25556 main.go:141] libmachine: (ha-083979-m02) Calling .GetState
	I1018 08:56:07.296610   25556 status.go:371] ha-083979-m02 host status = "Stopped" (err=<nil>)
	I1018 08:56:07.296627   25556 status.go:384] host is not running, skipping remaining checks
	I1018 08:56:07.296632   25556 status.go:176] ha-083979-m02 status: &{Name:ha-083979-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 08:56:07.296648   25556 status.go:174] checking status of ha-083979-m03 ...
	I1018 08:56:07.296944   25556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:07.296998   25556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:07.312400   25556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34587
	I1018 08:56:07.312886   25556 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:07.313402   25556 main.go:141] libmachine: Using API Version  1
	I1018 08:56:07.313422   25556 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:07.313784   25556 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:07.313991   25556 main.go:141] libmachine: (ha-083979-m03) Calling .GetState
	I1018 08:56:07.315658   25556 status.go:371] ha-083979-m03 host status = "Running" (err=<nil>)
	I1018 08:56:07.315699   25556 host.go:66] Checking if "ha-083979-m03" exists ...
	I1018 08:56:07.316005   25556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:07.316050   25556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:07.329482   25556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37897
	I1018 08:56:07.329910   25556 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:07.330385   25556 main.go:141] libmachine: Using API Version  1
	I1018 08:56:07.330413   25556 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:07.330738   25556 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:07.330950   25556 main.go:141] libmachine: (ha-083979-m03) Calling .GetIP
	I1018 08:56:07.334001   25556 main.go:141] libmachine: (ha-083979-m03) DBG | domain ha-083979-m03 has defined MAC address 52:54:00:b3:a2:bd in network mk-ha-083979
	I1018 08:56:07.334546   25556 main.go:141] libmachine: (ha-083979-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a2:bd", ip: ""} in network mk-ha-083979: {Iface:virbr1 ExpiryTime:2025-10-18 09:52:00 +0000 UTC Type:0 Mac:52:54:00:b3:a2:bd Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-083979-m03 Clientid:01:52:54:00:b3:a2:bd}
	I1018 08:56:07.334576   25556 main.go:141] libmachine: (ha-083979-m03) DBG | domain ha-083979-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:b3:a2:bd in network mk-ha-083979
	I1018 08:56:07.334775   25556 host.go:66] Checking if "ha-083979-m03" exists ...
	I1018 08:56:07.335139   25556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:07.335196   25556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:07.349078   25556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I1018 08:56:07.349654   25556 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:07.350253   25556 main.go:141] libmachine: Using API Version  1
	I1018 08:56:07.350279   25556 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:07.350673   25556 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:07.350876   25556 main.go:141] libmachine: (ha-083979-m03) Calling .DriverName
	I1018 08:56:07.351108   25556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:56:07.351127   25556 main.go:141] libmachine: (ha-083979-m03) Calling .GetSSHHostname
	I1018 08:56:07.354976   25556 main.go:141] libmachine: (ha-083979-m03) DBG | domain ha-083979-m03 has defined MAC address 52:54:00:b3:a2:bd in network mk-ha-083979
	I1018 08:56:07.355637   25556 main.go:141] libmachine: (ha-083979-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a2:bd", ip: ""} in network mk-ha-083979: {Iface:virbr1 ExpiryTime:2025-10-18 09:52:00 +0000 UTC Type:0 Mac:52:54:00:b3:a2:bd Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-083979-m03 Clientid:01:52:54:00:b3:a2:bd}
	I1018 08:56:07.355668   25556 main.go:141] libmachine: (ha-083979-m03) DBG | domain ha-083979-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:b3:a2:bd in network mk-ha-083979
	I1018 08:56:07.355856   25556 main.go:141] libmachine: (ha-083979-m03) Calling .GetSSHPort
	I1018 08:56:07.356091   25556 main.go:141] libmachine: (ha-083979-m03) Calling .GetSSHKeyPath
	I1018 08:56:07.356255   25556 main.go:141] libmachine: (ha-083979-m03) Calling .GetSSHUsername
	I1018 08:56:07.356406   25556 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/ha-083979-m03/id_rsa Username:docker}
	I1018 08:56:07.443535   25556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:56:07.465275   25556 kubeconfig.go:125] found "ha-083979" server: "https://192.168.39.254:8443"
	I1018 08:56:07.465311   25556 api_server.go:166] Checking apiserver status ...
	I1018 08:56:07.465386   25556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:56:07.491954   25556 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1770/cgroup
	W1018 08:56:07.504533   25556 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1770/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 08:56:07.504601   25556 ssh_runner.go:195] Run: ls
	I1018 08:56:07.510356   25556 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1018 08:56:07.518235   25556 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1018 08:56:07.518271   25556 status.go:463] ha-083979-m03 apiserver status = Running (err=<nil>)
	I1018 08:56:07.518281   25556 status.go:176] ha-083979-m03 status: &{Name:ha-083979-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 08:56:07.518308   25556 status.go:174] checking status of ha-083979-m04 ...
	I1018 08:56:07.518773   25556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:07.518821   25556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:07.533401   25556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I1018 08:56:07.533868   25556 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:07.534381   25556 main.go:141] libmachine: Using API Version  1
	I1018 08:56:07.534406   25556 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:07.534815   25556 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:07.535098   25556 main.go:141] libmachine: (ha-083979-m04) Calling .GetState
	I1018 08:56:07.537211   25556 status.go:371] ha-083979-m04 host status = "Running" (err=<nil>)
	I1018 08:56:07.537227   25556 host.go:66] Checking if "ha-083979-m04" exists ...
	I1018 08:56:07.537517   25556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:07.537551   25556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:07.552584   25556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41031
	I1018 08:56:07.553108   25556 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:07.553628   25556 main.go:141] libmachine: Using API Version  1
	I1018 08:56:07.553681   25556 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:07.554076   25556 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:07.554317   25556 main.go:141] libmachine: (ha-083979-m04) Calling .GetIP
	I1018 08:56:07.558096   25556 main.go:141] libmachine: (ha-083979-m04) DBG | domain ha-083979-m04 has defined MAC address 52:54:00:d9:b8:87 in network mk-ha-083979
	I1018 08:56:07.558657   25556 main.go:141] libmachine: (ha-083979-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b8:87", ip: ""} in network mk-ha-083979: {Iface:virbr1 ExpiryTime:2025-10-18 09:54:02 +0000 UTC Type:0 Mac:52:54:00:d9:b8:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-083979-m04 Clientid:01:52:54:00:d9:b8:87}
	I1018 08:56:07.558698   25556 main.go:141] libmachine: (ha-083979-m04) DBG | domain ha-083979-m04 has defined IP address 192.168.39.128 and MAC address 52:54:00:d9:b8:87 in network mk-ha-083979
	I1018 08:56:07.558911   25556 host.go:66] Checking if "ha-083979-m04" exists ...
	I1018 08:56:07.559331   25556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:07.559373   25556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:07.573881   25556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44915
	I1018 08:56:07.574411   25556 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:07.574896   25556 main.go:141] libmachine: Using API Version  1
	I1018 08:56:07.574943   25556 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:07.575334   25556 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:07.575547   25556 main.go:141] libmachine: (ha-083979-m04) Calling .DriverName
	I1018 08:56:07.575756   25556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:56:07.575783   25556 main.go:141] libmachine: (ha-083979-m04) Calling .GetSSHHostname
	I1018 08:56:07.579135   25556 main.go:141] libmachine: (ha-083979-m04) DBG | domain ha-083979-m04 has defined MAC address 52:54:00:d9:b8:87 in network mk-ha-083979
	I1018 08:56:07.579623   25556 main.go:141] libmachine: (ha-083979-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b8:87", ip: ""} in network mk-ha-083979: {Iface:virbr1 ExpiryTime:2025-10-18 09:54:02 +0000 UTC Type:0 Mac:52:54:00:d9:b8:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-083979-m04 Clientid:01:52:54:00:d9:b8:87}
	I1018 08:56:07.579654   25556 main.go:141] libmachine: (ha-083979-m04) DBG | domain ha-083979-m04 has defined IP address 192.168.39.128 and MAC address 52:54:00:d9:b8:87 in network mk-ha-083979
	I1018 08:56:07.579875   25556 main.go:141] libmachine: (ha-083979-m04) Calling .GetSSHPort
	I1018 08:56:07.580135   25556 main.go:141] libmachine: (ha-083979-m04) Calling .GetSSHKeyPath
	I1018 08:56:07.580328   25556 main.go:141] libmachine: (ha-083979-m04) Calling .GetSSHUsername
	I1018 08:56:07.580536   25556 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/ha-083979-m04/id_rsa Username:docker}
	I1018 08:56:07.663627   25556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:56:07.686394   25556 status.go:176] ha-083979-m04 status: &{Name:ha-083979-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (82.73s)
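
For context on the status probe captured in the stderr log above: after confirming the host is running, the status command checks kubelet over SSH, looks for the kube-apiserver process, and finally polls the apiserver's /healthz endpoint (here https://192.168.39.254:8443/healthz answering 200 "ok"). The Go sketch below reproduces only that last step. It is an illustration, not minikube's actual status.go code; the endpoint URL is taken from the log, and the InsecureSkipVerify setting is an assumption made only to keep the example self-contained.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz issues a GET against an apiserver /healthz endpoint and
	// reports whether it answered 200 with the body "ok", mirroring the check
	// visible in the status log above.
	func checkHealthz(url string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The test cluster uses a self-signed CA; skipping verification
				// keeps this sketch self-contained. Real code would load the
				// cluster CA certificate instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		healthy, err := checkHealthz("https://192.168.39.254:8443/healthz")
		fmt.Println(healthy, err)
	}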

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (37.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 node start m02 --alsologtostderr -v 5
E1018 08:56:26.739083    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-083979 node start m02 --alsologtostderr -v 5: (35.897454055s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-083979 status --alsologtostderr -v 5: (1.121702324s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.09s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (384.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 stop --alsologtostderr -v 5
E1018 08:57:50.590119    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:58:42.876566    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:59:10.582573    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-083979 stop --alsologtostderr -v 5: (4m18.893548466s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 start --wait true --alsologtostderr -v 5
E1018 09:02:50.588209    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-083979 start --wait true --alsologtostderr -v 5: (2m5.439007203s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (384.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-083979 node delete m03 --alsologtostderr -v 5: (17.833899568s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.65s)
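
The DeleteSecondaryNode check above verifies node health with a kubectl go-template that prints each node's Ready condition. A rough Go equivalent of that verification is sketched below; it assumes kubectl is on PATH and already pointed at the cluster's context, and it is not the helper the test itself uses.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// readyTmpl is the same go-template the test passes to kubectl: for every
	// node, print the status of its "Ready" condition, one per line.
	const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// allNodesReady shells out to kubectl and returns true only if every node
	// reports Ready=True.
	func allNodesReady() (bool, error) {
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+readyTmpl).Output()
		if err != nil {
			return false, err
		}
		for _, status := range strings.Fields(string(out)) {
			if status != "True" {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		ok, err := allNodesReady()
		fmt.Println(ok, err)
	}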

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (243.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 stop --alsologtostderr -v 5
E1018 09:03:42.875469    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-083979 stop --alsologtostderr -v 5: (4m3.757388325s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-083979 status --alsologtostderr -v 5: exit status 7 (114.716394ms)

                                                
                                                
-- stdout --
	ha-083979
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-083979-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-083979-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:07:34.015765   29518 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:07:34.016089   29518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:07:34.016101   29518 out.go:374] Setting ErrFile to fd 2...
	I1018 09:07:34.016107   29518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:07:34.016308   29518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
	I1018 09:07:34.016509   29518 out.go:368] Setting JSON to false
	I1018 09:07:34.016539   29518 mustload.go:65] Loading cluster: ha-083979
	I1018 09:07:34.016622   29518 notify.go:220] Checking for updates...
	I1018 09:07:34.017194   29518 config.go:182] Loaded profile config "ha-083979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:07:34.017263   29518 status.go:174] checking status of ha-083979 ...
	I1018 09:07:34.017815   29518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:07:34.017852   29518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:07:34.040381   29518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41279
	I1018 09:07:34.040998   29518 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:07:34.041608   29518 main.go:141] libmachine: Using API Version  1
	I1018 09:07:34.041637   29518 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:07:34.042140   29518 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:07:34.042404   29518 main.go:141] libmachine: (ha-083979) Calling .GetState
	I1018 09:07:34.044479   29518 status.go:371] ha-083979 host status = "Stopped" (err=<nil>)
	I1018 09:07:34.044497   29518 status.go:384] host is not running, skipping remaining checks
	I1018 09:07:34.044504   29518 status.go:176] ha-083979 status: &{Name:ha-083979 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:07:34.044530   29518 status.go:174] checking status of ha-083979-m02 ...
	I1018 09:07:34.044989   29518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:07:34.045048   29518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:07:34.059636   29518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I1018 09:07:34.060105   29518 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:07:34.060493   29518 main.go:141] libmachine: Using API Version  1
	I1018 09:07:34.060507   29518 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:07:34.060904   29518 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:07:34.061141   29518 main.go:141] libmachine: (ha-083979-m02) Calling .GetState
	I1018 09:07:34.063430   29518 status.go:371] ha-083979-m02 host status = "Stopped" (err=<nil>)
	I1018 09:07:34.063447   29518 status.go:384] host is not running, skipping remaining checks
	I1018 09:07:34.063458   29518 status.go:176] ha-083979-m02 status: &{Name:ha-083979-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:07:34.063476   29518 status.go:174] checking status of ha-083979-m04 ...
	I1018 09:07:34.063792   29518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:07:34.063829   29518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:07:34.077569   29518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34109
	I1018 09:07:34.078119   29518 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:07:34.078623   29518 main.go:141] libmachine: Using API Version  1
	I1018 09:07:34.078648   29518 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:07:34.079044   29518 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:07:34.079262   29518 main.go:141] libmachine: (ha-083979-m04) Calling .GetState
	I1018 09:07:34.081293   29518 status.go:371] ha-083979-m04 host status = "Stopped" (err=<nil>)
	I1018 09:07:34.081310   29518 status.go:384] host is not running, skipping remaining checks
	I1018 09:07:34.081317   29518 status.go:176] ha-083979-m04 status: &{Name:ha-083979-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (243.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (106.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:07:50.583348    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:08:42.876184    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-083979 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m45.602707606s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (106.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (75.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 node add --control-plane --alsologtostderr -v 5
E1018 09:10:05.946132    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-083979 node add --control-plane --alsologtostderr -v 5: (1m14.656568166s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-083979 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

                                                
                                    
x
+
TestJSONOutput/start/Command (56.45s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-570881 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:10:53.663108    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-570881 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (56.449966478s)
--- PASS: TestJSONOutput/start/Command (56.45s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.82s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-570881 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.82s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-570881 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.27s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-570881 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-570881 --output=json --user=testUser: (7.271850855s)
--- PASS: TestJSONOutput/stop/Command (7.27s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-837737 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-837737 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (70.498486ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"400541c0-2c68-413a-80c2-34ed668ba035","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-837737] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3dc86fcf-2323-485f-ada7-d143cc253140","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21767"}}
	{"specversion":"1.0","id":"aa5e6205-c22a-4d98-9fe2-cf345278406a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2a001fc2-3406-4413-b62c-2ae384f2c304","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig"}}
	{"specversion":"1.0","id":"2aba3713-d717-4c8e-9591-d5f7acb31ecd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube"}}
	{"specversion":"1.0","id":"dfee8b26-6af5-4da0-94ef-1c587491e833","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"046aae75-a0c0-4015-bba1-08ff38634971","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"534a97dc-1e53-4c1d-b679-4aa3dac13ea5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-837737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-837737
--- PASS: TestErrorJSONOutput (0.22s)
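
Every --output=json invocation in this report, including the TestErrorJSONOutput run above, emits one CloudEvents-style JSON object per line (types such as io.k8s.sigs.minikube.step, .info and .error, each with a string-valued data map). The sketch below shows one way such a stream could be consumed; the field names are taken from the events printed in this log, and treating every data value as a string is an assumption that matches them.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the CloudEvents-style lines minikube prints with
	// --output=json, as seen in the stdout block above.
	type event struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// Read events line by line from stdin, e.g.
		//   minikube start --output=json ... | go run parse_events.go
		// (parse_events.go is just a hypothetical name for this file).
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event line
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
				continue
			}
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}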

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (84.38s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-028267 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-028267 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.707448509s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-031127 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:12:50.583142    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-031127 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.806917837s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-028267
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-031127
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-031127" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-031127
helpers_test.go:175: Cleaning up "first-028267" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-028267
--- PASS: TestMinikubeProfile (84.38s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (21.36s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-236306 --memory=3072 --mount-string /tmp/TestMountStartserial874567298/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-236306 --memory=3072 --mount-string /tmp/TestMountStartserial874567298/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.354766351s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.36s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-236306 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-236306 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
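
The VerifyMount* steps above inspect the host mount inside the guest with ls and findmnt --json /minikube-host over minikube ssh. The sketch below decodes the standard util-linux findmnt --json layout to answer "is this path mounted?"; it runs findmnt locally rather than inside the VM, so it is only a shape-of-the-data illustration, not the test's own check.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// mountInfo models util-linux `findmnt --json <target>` output:
	// a "filesystems" array whose entries carry target, source, fstype
	// and options for the matching mount point.
	type mountInfo struct {
		Filesystems []struct {
			Target  string `json:"target"`
			Source  string `json:"source"`
			FSType  string `json:"fstype"`
			Options string `json:"options"`
		} `json:"filesystems"`
	}

	// isMounted reports whether anything is mounted at target. findmnt exits
	// non-zero when the target is not a mount point, which surfaces as err.
	func isMounted(target string) (bool, error) {
		out, err := exec.Command("findmnt", "--json", target).Output()
		if err != nil {
			return false, err
		}
		var info mountInfo
		if err := json.Unmarshal(out, &info); err != nil {
			return false, err
		}
		return len(info.Filesystems) > 0, nil
	}

	func main() {
		ok, err := isMounted("/minikube-host")
		fmt.Println(ok, err)
	}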

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (23.51s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-256511 --memory=3072 --mount-string /tmp/TestMountStartserial874567298/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:13:42.876706    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-256511 --memory=3072 --mount-string /tmp/TestMountStartserial874567298/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (22.509502875s)
--- PASS: TestMountStart/serial/StartWithMountSecond (23.51s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-256511 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-256511 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.75s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-236306 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.75s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-256511 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-256511 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-256511
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-256511: (1.297431853s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (17.33s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-256511
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-256511: (16.33093632s)
--- PASS: TestMountStart/serial/RestartStopped (17.33s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-256511 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-256511 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (131.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-407105 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-407105 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m10.905381314s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (131.34s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-407105 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-407105 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-407105 -- rollout status deployment/busybox: (4.929866344s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-407105 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-407105 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-407105 -- exec busybox-7b57f96db7-gkl45 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-407105 -- exec busybox-7b57f96db7-kcnqm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-407105 -- exec busybox-7b57f96db7-gkl45 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-407105 -- exec busybox-7b57f96db7-kcnqm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-407105 -- exec busybox-7b57f96db7-gkl45 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-407105 -- exec busybox-7b57f96db7-kcnqm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.50s)
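
The DeployApp2Nodes flow above rolls out a two-replica busybox deployment, lists the pod names with a jsonpath query, and then runs nslookup inside each pod to confirm cluster DNS works from both nodes. A rough Go equivalent of that loop is sketched below; kubectl on PATH, the default namespace, and the busybox- name prefix used for filtering are assumptions for the example.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// dnsCheck lists pods the same way the test does (jsonpath over metadata.name)
	// and execs nslookup for the given name inside every busybox pod.
	func dnsCheck(name string) error {
		out, err := exec.Command("kubectl", "get", "pods",
			"-o", "jsonpath={.items[*].metadata.name}").Output()
		if err != nil {
			return err
		}
		for _, pod := range strings.Fields(string(out)) {
			if !strings.HasPrefix(pod, "busybox-") {
				continue // only the deployment's pods; the prefix filter is an assumption
			}
			if err := exec.Command("kubectl", "exec", pod, "--", "nslookup", name).Run(); err != nil {
				return fmt.Errorf("%s: nslookup %s failed: %w", pod, name, err)
			}
		}
		return nil
	}

	func main() {
		for _, host := range []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"} {
			fmt.Println(host, dnsCheck(host))
		}
	}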

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-407105 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-407105 -- exec busybox-7b57f96db7-gkl45 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-407105 -- exec busybox-7b57f96db7-gkl45 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-407105 -- exec busybox-7b57f96db7-kcnqm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-407105 -- exec busybox-7b57f96db7-kcnqm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (42.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-407105 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-407105 -v=5 --alsologtostderr: (42.329377656s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.93s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-407105 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 cp testdata/cp-test.txt multinode-407105:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 cp multinode-407105:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3971848042/001/cp-test_multinode-407105.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 cp multinode-407105:/home/docker/cp-test.txt multinode-407105-m02:/home/docker/cp-test_multinode-407105_multinode-407105-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105-m02 "sudo cat /home/docker/cp-test_multinode-407105_multinode-407105-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 cp multinode-407105:/home/docker/cp-test.txt multinode-407105-m03:/home/docker/cp-test_multinode-407105_multinode-407105-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105-m03 "sudo cat /home/docker/cp-test_multinode-407105_multinode-407105-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 cp testdata/cp-test.txt multinode-407105-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 cp multinode-407105-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3971848042/001/cp-test_multinode-407105-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 cp multinode-407105-m02:/home/docker/cp-test.txt multinode-407105:/home/docker/cp-test_multinode-407105-m02_multinode-407105.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105 "sudo cat /home/docker/cp-test_multinode-407105-m02_multinode-407105.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 cp multinode-407105-m02:/home/docker/cp-test.txt multinode-407105-m03:/home/docker/cp-test_multinode-407105-m02_multinode-407105-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105-m03 "sudo cat /home/docker/cp-test_multinode-407105-m02_multinode-407105-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 cp testdata/cp-test.txt multinode-407105-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 cp multinode-407105-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3971848042/001/cp-test_multinode-407105-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 cp multinode-407105-m03:/home/docker/cp-test.txt multinode-407105:/home/docker/cp-test_multinode-407105-m03_multinode-407105.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105 "sudo cat /home/docker/cp-test_multinode-407105-m03_multinode-407105.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 cp multinode-407105-m03:/home/docker/cp-test.txt multinode-407105-m02:/home/docker/cp-test_multinode-407105-m03_multinode-407105-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 ssh -n multinode-407105-m02 "sudo cat /home/docker/cp-test_multinode-407105-m03_multinode-407105-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.42s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-407105 node stop m03: (1.548498555s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-407105 status: exit status 7 (464.205139ms)

                                                
                                                
-- stdout --
	multinode-407105
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-407105-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-407105-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-407105 status --alsologtostderr: exit status 7 (451.831312ms)

                                                
                                                
-- stdout --
	multinode-407105
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-407105-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-407105-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:17:29.908657   37123 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:17:29.908978   37123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:29.908987   37123 out.go:374] Setting ErrFile to fd 2...
	I1018 09:17:29.908992   37123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:17:29.909171   37123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
	I1018 09:17:29.909350   37123 out.go:368] Setting JSON to false
	I1018 09:17:29.909376   37123 mustload.go:65] Loading cluster: multinode-407105
	I1018 09:17:29.909463   37123 notify.go:220] Checking for updates...
	I1018 09:17:29.909906   37123 config.go:182] Loaded profile config "multinode-407105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:17:29.909940   37123 status.go:174] checking status of multinode-407105 ...
	I1018 09:17:29.910453   37123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:17:29.910483   37123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:17:29.925755   37123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42891
	I1018 09:17:29.926431   37123 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:17:29.927025   37123 main.go:141] libmachine: Using API Version  1
	I1018 09:17:29.927062   37123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:17:29.927546   37123 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:17:29.927773   37123 main.go:141] libmachine: (multinode-407105) Calling .GetState
	I1018 09:17:29.930063   37123 status.go:371] multinode-407105 host status = "Running" (err=<nil>)
	I1018 09:17:29.930080   37123 host.go:66] Checking if "multinode-407105" exists ...
	I1018 09:17:29.930380   37123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:17:29.930419   37123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:17:29.944245   37123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37079
	I1018 09:17:29.944646   37123 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:17:29.945094   37123 main.go:141] libmachine: Using API Version  1
	I1018 09:17:29.945120   37123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:17:29.945625   37123 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:17:29.945825   37123 main.go:141] libmachine: (multinode-407105) Calling .GetIP
	I1018 09:17:29.948917   37123 main.go:141] libmachine: (multinode-407105) DBG | domain multinode-407105 has defined MAC address 52:54:00:41:99:7b in network mk-multinode-407105
	I1018 09:17:29.949454   37123 main.go:141] libmachine: (multinode-407105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:99:7b", ip: ""} in network mk-multinode-407105: {Iface:virbr1 ExpiryTime:2025-10-18 10:14:33 +0000 UTC Type:0 Mac:52:54:00:41:99:7b Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-407105 Clientid:01:52:54:00:41:99:7b}
	I1018 09:17:29.949489   37123 main.go:141] libmachine: (multinode-407105) DBG | domain multinode-407105 has defined IP address 192.168.39.187 and MAC address 52:54:00:41:99:7b in network mk-multinode-407105
	I1018 09:17:29.949726   37123 host.go:66] Checking if "multinode-407105" exists ...
	I1018 09:17:29.950083   37123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:17:29.950128   37123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:17:29.965628   37123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34845
	I1018 09:17:29.966098   37123 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:17:29.966603   37123 main.go:141] libmachine: Using API Version  1
	I1018 09:17:29.966628   37123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:17:29.967139   37123 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:17:29.967374   37123 main.go:141] libmachine: (multinode-407105) Calling .DriverName
	I1018 09:17:29.967582   37123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:17:29.967614   37123 main.go:141] libmachine: (multinode-407105) Calling .GetSSHHostname
	I1018 09:17:29.971331   37123 main.go:141] libmachine: (multinode-407105) DBG | domain multinode-407105 has defined MAC address 52:54:00:41:99:7b in network mk-multinode-407105
	I1018 09:17:29.971776   37123 main.go:141] libmachine: (multinode-407105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:99:7b", ip: ""} in network mk-multinode-407105: {Iface:virbr1 ExpiryTime:2025-10-18 10:14:33 +0000 UTC Type:0 Mac:52:54:00:41:99:7b Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-407105 Clientid:01:52:54:00:41:99:7b}
	I1018 09:17:29.971817   37123 main.go:141] libmachine: (multinode-407105) DBG | domain multinode-407105 has defined IP address 192.168.39.187 and MAC address 52:54:00:41:99:7b in network mk-multinode-407105
	I1018 09:17:29.971978   37123 main.go:141] libmachine: (multinode-407105) Calling .GetSSHPort
	I1018 09:17:29.972165   37123 main.go:141] libmachine: (multinode-407105) Calling .GetSSHKeyPath
	I1018 09:17:29.972327   37123 main.go:141] libmachine: (multinode-407105) Calling .GetSSHUsername
	I1018 09:17:29.972446   37123 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/multinode-407105/id_rsa Username:docker}
	I1018 09:17:30.054375   37123 ssh_runner.go:195] Run: systemctl --version
	I1018 09:17:30.062252   37123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:17:30.082384   37123 kubeconfig.go:125] found "multinode-407105" server: "https://192.168.39.187:8443"
	I1018 09:17:30.082424   37123 api_server.go:166] Checking apiserver status ...
	I1018 09:17:30.082462   37123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:17:30.109907   37123 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1337/cgroup
	W1018 09:17:30.122664   37123 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1337/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:17:30.122751   37123 ssh_runner.go:195] Run: ls
	I1018 09:17:30.128660   37123 api_server.go:253] Checking apiserver healthz at https://192.168.39.187:8443/healthz ...
	I1018 09:17:30.133650   37123 api_server.go:279] https://192.168.39.187:8443/healthz returned 200:
	ok
	I1018 09:17:30.133680   37123 status.go:463] multinode-407105 apiserver status = Running (err=<nil>)
	I1018 09:17:30.133692   37123 status.go:176] multinode-407105 status: &{Name:multinode-407105 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:17:30.133715   37123 status.go:174] checking status of multinode-407105-m02 ...
	I1018 09:17:30.134092   37123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:17:30.134140   37123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:17:30.148630   37123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37819
	I1018 09:17:30.149319   37123 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:17:30.149862   37123 main.go:141] libmachine: Using API Version  1
	I1018 09:17:30.149883   37123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:17:30.150319   37123 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:17:30.150560   37123 main.go:141] libmachine: (multinode-407105-m02) Calling .GetState
	I1018 09:17:30.152798   37123 status.go:371] multinode-407105-m02 host status = "Running" (err=<nil>)
	I1018 09:17:30.152818   37123 host.go:66] Checking if "multinode-407105-m02" exists ...
	I1018 09:17:30.153254   37123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:17:30.153333   37123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:17:30.167271   37123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42361
	I1018 09:17:30.167814   37123 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:17:30.168342   37123 main.go:141] libmachine: Using API Version  1
	I1018 09:17:30.168363   37123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:17:30.168734   37123 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:17:30.169016   37123 main.go:141] libmachine: (multinode-407105-m02) Calling .GetIP
	I1018 09:17:30.172512   37123 main.go:141] libmachine: (multinode-407105-m02) DBG | domain multinode-407105-m02 has defined MAC address 52:54:00:f4:d3:26 in network mk-multinode-407105
	I1018 09:17:30.173099   37123 main.go:141] libmachine: (multinode-407105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d3:26", ip: ""} in network mk-multinode-407105: {Iface:virbr1 ExpiryTime:2025-10-18 10:16:00 +0000 UTC Type:0 Mac:52:54:00:f4:d3:26 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-407105-m02 Clientid:01:52:54:00:f4:d3:26}
	I1018 09:17:30.173140   37123 main.go:141] libmachine: (multinode-407105-m02) DBG | domain multinode-407105-m02 has defined IP address 192.168.39.211 and MAC address 52:54:00:f4:d3:26 in network mk-multinode-407105
	I1018 09:17:30.173250   37123 host.go:66] Checking if "multinode-407105-m02" exists ...
	I1018 09:17:30.173600   37123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:17:30.173647   37123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:17:30.189696   37123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42713
	I1018 09:17:30.190228   37123 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:17:30.190694   37123 main.go:141] libmachine: Using API Version  1
	I1018 09:17:30.190722   37123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:17:30.191121   37123 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:17:30.191334   37123 main.go:141] libmachine: (multinode-407105-m02) Calling .DriverName
	I1018 09:17:30.191502   37123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:17:30.191518   37123 main.go:141] libmachine: (multinode-407105-m02) Calling .GetSSHHostname
	I1018 09:17:30.195364   37123 main.go:141] libmachine: (multinode-407105-m02) DBG | domain multinode-407105-m02 has defined MAC address 52:54:00:f4:d3:26 in network mk-multinode-407105
	I1018 09:17:30.195975   37123 main.go:141] libmachine: (multinode-407105-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:d3:26", ip: ""} in network mk-multinode-407105: {Iface:virbr1 ExpiryTime:2025-10-18 10:16:00 +0000 UTC Type:0 Mac:52:54:00:f4:d3:26 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-407105-m02 Clientid:01:52:54:00:f4:d3:26}
	I1018 09:17:30.196010   37123 main.go:141] libmachine: (multinode-407105-m02) DBG | domain multinode-407105-m02 has defined IP address 192.168.39.211 and MAC address 52:54:00:f4:d3:26 in network mk-multinode-407105
	I1018 09:17:30.196270   37123 main.go:141] libmachine: (multinode-407105-m02) Calling .GetSSHPort
	I1018 09:17:30.196485   37123 main.go:141] libmachine: (multinode-407105-m02) Calling .GetSSHKeyPath
	I1018 09:17:30.196627   37123 main.go:141] libmachine: (multinode-407105-m02) Calling .GetSSHUsername
	I1018 09:17:30.196764   37123 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21767-6063/.minikube/machines/multinode-407105-m02/id_rsa Username:docker}
	I1018 09:17:30.275330   37123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:17:30.291853   37123 status.go:176] multinode-407105-m02 status: &{Name:multinode-407105-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:17:30.291893   37123 status.go:174] checking status of multinode-407105-m03 ...
	I1018 09:17:30.292261   37123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:17:30.292312   37123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:17:30.306802   37123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41015
	I1018 09:17:30.307303   37123 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:17:30.307774   37123 main.go:141] libmachine: Using API Version  1
	I1018 09:17:30.307796   37123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:17:30.308175   37123 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:17:30.308533   37123 main.go:141] libmachine: (multinode-407105-m03) Calling .GetState
	I1018 09:17:30.310754   37123 status.go:371] multinode-407105-m03 host status = "Stopped" (err=<nil>)
	I1018 09:17:30.310770   37123 status.go:384] host is not running, skipping remaining checks
	I1018 09:17:30.310775   37123 status.go:176] multinode-407105-m03 status: &{Name:multinode-407105-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.47s)
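Stopping a single node leaves the rest of the cluster untouched, and minikube status then exits non-zero (7 in this run) because one host reports Stopped. A small sketch of picking that up from a script, using the same profile:

	out/minikube-linux-amd64 -p multinode-407105 node stop m03
	out/minikube-linux-amd64 -p multinode-407105 status
	rc=$?
	# in this run rc is 7 whenever a host is Stopped; 0 only when everything is Running
	echo "minikube status exit code: ${rc}"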

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (38.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 node start m03 -v=5 --alsologtostderr
E1018 09:17:50.583132    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-407105 node start m03 -v=5 --alsologtostderr: (38.2632225s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.91s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (335.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-407105
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-407105
E1018 09:18:42.876575    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-407105: (2m49.218428868s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-407105 --wait=true -v=5 --alsologtostderr
E1018 09:22:50.583299    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:23:42.875534    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-407105 --wait=true -v=5 --alsologtostderr: (2m46.644010967s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-407105
--- PASS: TestMultiNode/serial/RestartKeepsNodes (335.96s)
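The restart test records the node list, stops the whole profile, starts it again with --wait=true, and checks that the same nodes come back. The equivalent manual sequence, as a sketch (the temporary file paths are illustrative):

	out/minikube-linux-amd64 node list -p multinode-407105 > /tmp/nodes-before.txt
	out/minikube-linux-amd64 stop -p multinode-407105
	out/minikube-linux-amd64 start -p multinode-407105 --wait=true -v=5 --alsologtostderr
	out/minikube-linux-amd64 node list -p multinode-407105 > /tmp/nodes-after.txt
	# an empty diff means the restart kept every node
	diff /tmp/nodes-before.txt /tmp/nodes-after.txt && echo "node list unchanged"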

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-407105 node delete m03: (2.206493735s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.77s)
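Node deletion is verified from both sides: the minikube profile status and the API server's node list. A sketch of the same check:

	out/minikube-linux-amd64 -p multinode-407105 node delete m03
	out/minikube-linux-amd64 -p multinode-407105 status --alsologtostderr
	# the deleted node should also be gone from the cluster's point of view
	kubectl get nodes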

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (165.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-407105 stop: (2m45.696051404s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-407105 status: exit status 7 (95.576365ms)

                                                
                                                
-- stdout --
	multinode-407105
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-407105-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-407105 status --alsologtostderr: exit status 7 (82.515186ms)

                                                
                                                
-- stdout --
	multinode-407105
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-407105-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:26:33.789835   40427 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:26:33.790107   40427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:26:33.790116   40427 out.go:374] Setting ErrFile to fd 2...
	I1018 09:26:33.790120   40427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:26:33.790330   40427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
	I1018 09:26:33.790496   40427 out.go:368] Setting JSON to false
	I1018 09:26:33.790520   40427 mustload.go:65] Loading cluster: multinode-407105
	I1018 09:26:33.790568   40427 notify.go:220] Checking for updates...
	I1018 09:26:33.790873   40427 config.go:182] Loaded profile config "multinode-407105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:26:33.790886   40427 status.go:174] checking status of multinode-407105 ...
	I1018 09:26:33.791292   40427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:26:33.791328   40427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:26:33.804985   40427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36171
	I1018 09:26:33.805460   40427 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:26:33.805979   40427 main.go:141] libmachine: Using API Version  1
	I1018 09:26:33.806001   40427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:26:33.806433   40427 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:26:33.806645   40427 main.go:141] libmachine: (multinode-407105) Calling .GetState
	I1018 09:26:33.808483   40427 status.go:371] multinode-407105 host status = "Stopped" (err=<nil>)
	I1018 09:26:33.808501   40427 status.go:384] host is not running, skipping remaining checks
	I1018 09:26:33.808509   40427 status.go:176] multinode-407105 status: &{Name:multinode-407105 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:26:33.808551   40427 status.go:174] checking status of multinode-407105-m02 ...
	I1018 09:26:33.808986   40427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:26:33.809046   40427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:26:33.823263   40427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38903
	I1018 09:26:33.823645   40427 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:26:33.824119   40427 main.go:141] libmachine: Using API Version  1
	I1018 09:26:33.824141   40427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:26:33.824455   40427 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:26:33.824620   40427 main.go:141] libmachine: (multinode-407105-m02) Calling .GetState
	I1018 09:26:33.826479   40427 status.go:371] multinode-407105-m02 host status = "Stopped" (err=<nil>)
	I1018 09:26:33.826494   40427 status.go:384] host is not running, skipping remaining checks
	I1018 09:26:33.826499   40427 status.go:176] multinode-407105-m02 status: &{Name:multinode-407105-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (165.87s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (86.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-407105 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:26:45.948384    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:27:33.665120    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:27:50.583491    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-407105 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.819838083s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-407105 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.37s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (43.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-407105
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-407105-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-407105-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (68.369751ms)

                                                
                                                
-- stdout --
	* [multinode-407105-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-407105-m02' is duplicated with machine name 'multinode-407105-m02' in profile 'multinode-407105'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-407105-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-407105-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.859665351s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-407105
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-407105: exit status 80 (226.430659ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-407105 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-407105-m03 already exists in multinode-407105-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-407105-m03
E1018 09:28:42.875175    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.03s)
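Two naming rules are exercised here: a new profile may not reuse a machine name that already belongs to a node of an existing profile (exit 14, MK_USAGE), and node add refuses a node whose generated name collides with an existing profile (exit 80, GUEST_NODE_ADD). A sketch of surfacing the first case explicitly:

	# reusing an existing node's machine name as a profile name fails fast, before any VM is created
	out/minikube-linux-amd64 start -p multinode-407105-m02 --driver=kvm2 --container-runtime=crio
	echo "exit code: $?"    # 14 (MK_USAGE) in the run above
	# existing node and profile names can be listed up front when choosing a new name
	out/minikube-linux-amd64 node list -p multinode-407105
	out/minikube-linux-amd64 profile list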

                                                
                                    
x
+
TestScheduledStopUnix (112.18s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-816120 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-816120 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.442980746s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-816120 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-816120 -n scheduled-stop-816120
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-816120 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1018 09:32:10.774647    9956 retry.go:31] will retry after 89.427µs: open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/scheduled-stop-816120/pid: no such file or directory
I1018 09:32:10.775806    9956 retry.go:31] will retry after 169.999µs: open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/scheduled-stop-816120/pid: no such file or directory
I1018 09:32:10.777033    9956 retry.go:31] will retry after 269.937µs: open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/scheduled-stop-816120/pid: no such file or directory
I1018 09:32:10.778203    9956 retry.go:31] will retry after 237.91µs: open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/scheduled-stop-816120/pid: no such file or directory
I1018 09:32:10.779343    9956 retry.go:31] will retry after 267.969µs: open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/scheduled-stop-816120/pid: no such file or directory
I1018 09:32:10.780473    9956 retry.go:31] will retry after 1.129261ms: open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/scheduled-stop-816120/pid: no such file or directory
I1018 09:32:10.782708    9956 retry.go:31] will retry after 1.069152ms: open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/scheduled-stop-816120/pid: no such file or directory
I1018 09:32:10.783882    9956 retry.go:31] will retry after 1.093634ms: open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/scheduled-stop-816120/pid: no such file or directory
I1018 09:32:10.785082    9956 retry.go:31] will retry after 2.633698ms: open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/scheduled-stop-816120/pid: no such file or directory
I1018 09:32:10.788364    9956 retry.go:31] will retry after 4.730805ms: open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/scheduled-stop-816120/pid: no such file or directory
I1018 09:32:10.793648    9956 retry.go:31] will retry after 3.984058ms: open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/scheduled-stop-816120/pid: no such file or directory
I1018 09:32:10.797964    9956 retry.go:31] will retry after 9.725401ms: open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/scheduled-stop-816120/pid: no such file or directory
I1018 09:32:10.808347    9956 retry.go:31] will retry after 15.290647ms: open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/scheduled-stop-816120/pid: no such file or directory
I1018 09:32:10.824612    9956 retry.go:31] will retry after 28.511831ms: open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/scheduled-stop-816120/pid: no such file or directory
I1018 09:32:10.853947    9956 retry.go:31] will retry after 26.653689ms: open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/scheduled-stop-816120/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-816120 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-816120 -n scheduled-stop-816120
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-816120
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-816120 --schedule 15s
E1018 09:32:50.589836    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-816120
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-816120: exit status 7 (71.793031ms)

                                                
                                                
-- stdout --
	scheduled-stop-816120
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-816120 -n scheduled-stop-816120
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-816120 -n scheduled-stop-816120: exit status 7 (64.824361ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-816120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-816120
--- PASS: TestScheduledStopUnix (112.18s)
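The scheduled-stop flow arms a delayed stop with --schedule, supersedes a pending stop when --schedule is issued again (the "os: process already finished" lines above come from checking the previously scheduled process), and disarms it with --cancel-scheduled. A condensed sketch of the same sequence:

	out/minikube-linux-amd64 stop -p scheduled-stop-816120 --schedule 5m       # arm a stop 5 minutes out
	out/minikube-linux-amd64 stop -p scheduled-stop-816120 --cancel-scheduled  # disarm it
	out/minikube-linux-amd64 stop -p scheduled-stop-816120 --schedule 15s      # re-arm with a short window
	sleep 30
	out/minikube-linux-amd64 status -p scheduled-stop-816120                   # exit 7 once the host is Stopped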

                                                
                                    
x
+
TestRunningBinaryUpgrade (149.92s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3902647756 start -p running-upgrade-947647 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:33:42.875213    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3902647756 start -p running-upgrade-947647 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m42.757196167s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-947647 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-947647 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (45.516575295s)
helpers_test.go:175: Cleaning up "running-upgrade-947647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-947647
--- PASS: TestRunningBinaryUpgrade (149.92s)
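The running-upgrade test starts a profile with an older released binary and then runs start on the same profile with the binary under test, upgrading it in place (note the old release takes --vm-driver where the new binary takes --driver). A sketch of that flow, with the old binary path taken from this run:

	# bring the profile up with the old release first
	/tmp/minikube-v1.32.0.3902647756 start -p running-upgrade-947647 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
	# re-running start with the new binary on the same profile performs the in-place upgrade
	out/minikube-linux-amd64 start -p running-upgrade-947647 --memory=3072 --driver=kvm2 --container-runtime=crio --alsologtostderr -v=1
	out/minikube-linux-amd64 delete -p running-upgrade-947647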

                                                
                                    
x
+
TestKubernetesUpgrade (265.69s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-178467 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-178467 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (59.524979728s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-178467
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-178467: (2.066513295s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-178467 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-178467 status --format={{.Host}}: exit status 7 (87.686047ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-178467 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-178467 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m12.1091458s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-178467 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-178467 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-178467 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (124.380401ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-178467] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-178467
	    minikube start -p kubernetes-upgrade-178467 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1784672 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-178467 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-178467 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-178467 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m10.650193163s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-178467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-178467
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-178467: (1.03783608s)
--- PASS: TestKubernetesUpgrade (265.69s)
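The attempted downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED (exit 106), and the suggestion block above lists the supported alternatives. Following its first option, recreating the profile at the older version, looks like this sketch:

	# a live cluster cannot be downgraded in place; delete and recreate at the target version
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-178467
	out/minikube-linux-amd64 start -p kubernetes-upgrade-178467 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio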

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-914044 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-914044 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (84.700087ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-914044] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
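--no-kubernetes and --kubernetes-version are mutually exclusive; when a version is pinned (here via the flag, but the same message applies to a global config value), the error suggests clearing it first. A sketch of the accepted form:

	# clear any globally pinned version, then start the VM without Kubernetes components
	out/minikube-linux-amd64 config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-914044 --no-kubernetes --memory=3072 --driver=kvm2 --container-runtime=crio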

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (82.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-914044 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-914044 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m22.655557482s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-914044 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (82.96s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (45.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-914044 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-914044 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (44.772098315s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-914044 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-914044 status -o json: exit status 2 (278.071926ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-914044","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-914044
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (45.97s)
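status -o json is the scripting-friendly form of the same check; the exit code still reflects stopped components (2 here, with the host Running but kubelet and apiserver Stopped). A sketch of reading individual fields, assuming jq is available on the host:

	out/minikube-linux-amd64 -p NoKubernetes-914044 status -o json > /tmp/status.json || true
	# Host, Kubelet and APIServer match the keys visible in the JSON output above
	jq -r '"host=" + .Host + " kubelet=" + .Kubelet + " apiserver=" + .APIServer' /tmp/status.json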

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-081586 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-081586 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (126.837239ms)

                                                
                                                
-- stdout --
	* [false-081586] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:35:30.075523   46509 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:35:30.075843   46509 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:30.075857   46509 out.go:374] Setting ErrFile to fd 2...
	I1018 09:35:30.075863   46509 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:30.076202   46509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-6063/.minikube/bin
	I1018 09:35:30.076814   46509 out.go:368] Setting JSON to false
	I1018 09:35:30.078107   46509 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4680,"bootTime":1760775450,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:35:30.078184   46509 start.go:141] virtualization: kvm guest
	I1018 09:35:30.081466   46509 out.go:179] * [false-081586] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:35:30.082956   46509 notify.go:220] Checking for updates...
	I1018 09:35:30.083481   46509 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:35:30.085024   46509 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:35:30.086641   46509 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-6063/kubeconfig
	I1018 09:35:30.088577   46509 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-6063/.minikube
	I1018 09:35:30.089961   46509 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:35:30.091214   46509 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:35:30.093133   46509 config.go:182] Loaded profile config "NoKubernetes-914044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1018 09:35:30.093303   46509 config.go:182] Loaded profile config "kubernetes-upgrade-178467": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:30.093406   46509 config.go:182] Loaded profile config "running-upgrade-947647": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1018 09:35:30.093529   46509 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:35:30.134525   46509 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 09:35:30.135928   46509 start.go:305] selected driver: kvm2
	I1018 09:35:30.135948   46509 start.go:925] validating driver "kvm2" against <nil>
	I1018 09:35:30.135963   46509 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:35:30.138449   46509 out.go:203] 
	W1018 09:35:30.139754   46509 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1018 09:35:30.140967   46509 out.go:203] 

                                                
                                                
** /stderr **
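The failure above is a pre-flight check: the crio container runtime requires a CNI plugin, so --cni=false is rejected (exit 14) before any VM is created, and the debugLogs dump that follows is expected to find no context for the profile. Leaving CNI selection to minikube is the accepted form, as a sketch:

	# with --container-runtime=crio, omit --cni (or pick a real plugin) so minikube can configure networking
	out/minikube-linux-amd64 start -p false-081586 --memory=3072 --driver=kvm2 --container-runtime=crio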
net_test.go:88: 
----------------------- debugLogs start: false-081586 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-081586

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-081586

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-081586

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-081586

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-081586

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-081586

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-081586

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-081586

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-081586

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-081586

>>> host: /etc/nsswitch.conf:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: /etc/hosts:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: /etc/resolv.conf:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-081586

>>> host: crictl pods:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: crictl containers:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> k8s: describe netcat deployment:
error: context "false-081586" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-081586" does not exist

>>> k8s: netcat logs:
error: context "false-081586" does not exist

>>> k8s: describe coredns deployment:
error: context "false-081586" does not exist

>>> k8s: describe coredns pods:
error: context "false-081586" does not exist

>>> k8s: coredns logs:
error: context "false-081586" does not exist

>>> k8s: describe api server pod(s):
error: context "false-081586" does not exist

>>> k8s: api server logs:
error: context "false-081586" does not exist

>>> host: /etc/cni:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: ip a s:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: ip r s:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: iptables-save:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: iptables table nat:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> k8s: describe kube-proxy daemon set:
error: context "false-081586" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-081586" does not exist

>>> k8s: kube-proxy logs:
error: context "false-081586" does not exist

>>> host: kubelet daemon status:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: kubelet daemon config:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> k8s: kubelet logs:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:35:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.121:8443
  name: kubernetes-upgrade-178467
contexts:
- context:
    cluster: kubernetes-upgrade-178467
    user: kubernetes-upgrade-178467
  name: kubernetes-upgrade-178467
current-context: kubernetes-upgrade-178467
kind: Config
users:
- name: kubernetes-upgrade-178467
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/kubernetes-upgrade-178467/client.crt
    client-key: /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/kubernetes-upgrade-178467/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-081586

>>> host: docker daemon status:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: docker daemon config:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: /etc/docker/daemon.json:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: docker system info:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: cri-docker daemon status:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: cri-docker daemon config:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: cri-dockerd version:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: containerd daemon status:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: containerd daemon config:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: /etc/containerd/config.toml:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: containerd config dump:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: crio daemon status:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: crio daemon config:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: /etc/crio:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

>>> host: crio config:
* Profile "false-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081586"

----------------------- debugLogs end: false-081586 [took: 3.188317273s] --------------------------------
helpers_test.go:175: Cleaning up "false-081586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-081586
--- PASS: TestNetworkPlugins/group/false (3.50s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (24.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-914044 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-914044 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (24.881635506s)
--- PASS: TestNoKubernetes/serial/Start (24.88s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.89s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (111.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1375585615 start -p stopped-upgrade-253577 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1375585615 start -p stopped-upgrade-253577 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (59.606850163s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1375585615 -p stopped-upgrade-253577 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1375585615 -p stopped-upgrade-253577 stop: (1.637513634s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-253577 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-253577 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (50.469224769s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (111.71s)

                                                
                                    
x
+
TestPause/serial/Start (106.65s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-251981 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-251981 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m46.653076826s)
--- PASS: TestPause/serial/Start (106.65s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-914044 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-914044 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.829485ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-914044
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-914044: (1.255116668s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (60.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-914044 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-914044 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m0.490320466s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (60.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-914044 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-914044 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.503259ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-253577
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-253577: (1.177884916s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (102.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-874951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E1018 09:38:42.874934    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-874951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m42.079124223s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (102.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-263234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-263234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (54.97332468s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (95.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-063875 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-063875 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m35.978967333s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (95.98s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-263234 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8a68ef86-e590-4b4f-93b3-d62e07be8dac] Pending
helpers_test.go:352: "busybox" [8a68ef86-e590-4b4f-93b3-d62e07be8dac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8a68ef86-e590-4b4f-93b3-d62e07be8dac] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004595621s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-263234 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-263234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-263234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.06713326s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-263234 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (83.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-263234 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-263234 --alsologtostderr -v=3: (1m23.243792641s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (83.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-874951 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6008b32c-c2c3-4552-8f9d-f3b16cf762ae] Pending
helpers_test.go:352: "busybox" [6008b32c-c2c3-4552-8f9d-f3b16cf762ae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6008b32c-c2c3-4552-8f9d-f3b16cf762ae] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004386857s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-874951 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.33s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-874951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-874951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.065026469s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-874951 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (70.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-874951 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-874951 --alsologtostderr -v=3: (1m10.923073755s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (70.92s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-063875 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5dc41f9c-2cfb-4c68-b913-9298d0fa79ca] Pending
helpers_test.go:352: "busybox" [5dc41f9c-2cfb-4c68-b913-9298d0fa79ca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5dc41f9c-2cfb-4c68-b913-9298d0fa79ca] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005043919s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-063875 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-063875 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-063875 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.009736696s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-063875 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (85.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-063875 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-063875 --alsologtostderr -v=3: (1m25.405143534s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (85.41s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-263234 -n default-k8s-diff-port-263234
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-263234 -n default-k8s-diff-port-263234: exit status 7 (64.547572ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-263234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-263234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-263234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (46.239038467s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-263234 -n default-k8s-diff-port-263234
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.67s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-874951 -n old-k8s-version-874951
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-874951 -n old-k8s-version-874951: exit status 7 (67.480376ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-874951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (58.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-874951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-874951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (57.691569759s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-874951 -n old-k8s-version-874951
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (58.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (90.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-701250 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-701250 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m30.842312643s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (90.84s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bj89t" [66568804-f0b6-4dfc-b298-ce9077ee26ea] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bj89t" [66568804-f0b6-4dfc-b298-ce9077ee26ea] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004972993s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-063875 -n embed-certs-063875
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-063875 -n embed-certs-063875: exit status 7 (91.053445ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-063875 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (55.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-063875 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-063875 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (54.690086368s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-063875 -n embed-certs-063875
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.05s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bj89t" [66568804-f0b6-4dfc-b298-ce9077ee26ea] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005031545s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-263234 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-263234 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-263234 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-263234 --alsologtostderr -v=1: (1.165600963s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-263234 -n default-k8s-diff-port-263234
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-263234 -n default-k8s-diff-port-263234: exit status 2 (327.940254ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-263234 -n default-k8s-diff-port-263234
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-263234 -n default-k8s-diff-port-263234: exit status 2 (305.610841ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-263234 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-263234 --alsologtostderr -v=1: (1.034994332s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-263234 -n default-k8s-diff-port-263234
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-263234 -n default-k8s-diff-port-263234
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.58s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-w2m7m" [42854228-bdee-444b-83bc-6a8fa2724a34] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-w2m7m" [42854228-bdee-444b-83bc-6a8fa2724a34] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.00458313s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (56.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-513333 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-513333 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (56.436744699s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (56.44s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-w2m7m" [42854228-bdee-444b-83bc-6a8fa2724a34] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005389899s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-874951 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-874951 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (4.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-874951 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-874951 --alsologtostderr -v=1: (1.421468432s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-874951 -n old-k8s-version-874951
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-874951 -n old-k8s-version-874951: exit status 2 (322.998768ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-874951 -n old-k8s-version-874951
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-874951 -n old-k8s-version-874951: exit status 2 (276.212997ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-874951 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-874951 --alsologtostderr -v=1: (1.086345719s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-874951 -n old-k8s-version-874951
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-874951 -n old-k8s-version-874951
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (88.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-081586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:42:50.583557    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-081586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m28.550348809s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.55s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4dzkq" [3c6b2869-e7ee-40cf-85b8-a9ef76d683fe] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004184189s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4dzkq" [3c6b2869-e7ee-40cf-85b8-a9ef76d683fe] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004006366s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-063875 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/DeployApp (11.36s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-701250 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5cd6274e-18b0-4d17-abc3-da87b5e4653a] Pending
helpers_test.go:352: "busybox" [5cd6274e-18b0-4d17-abc3-da87b5e4653a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5cd6274e-18b0-4d17-abc3-da87b5e4653a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005009377s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-701250 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.36s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-063875 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (3.35s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-063875 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-063875 -n embed-certs-063875
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-063875 -n embed-certs-063875: exit status 2 (300.310852ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-063875 -n embed-certs-063875
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-063875 -n embed-certs-063875: exit status 2 (277.273149ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-063875 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-063875 -n embed-certs-063875
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-063875 -n embed-certs-063875
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.35s)

TestNetworkPlugins/group/calico/Start (73.55s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-081586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-081586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m13.547059955s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.55s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.38s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-701250 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-701250 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.276030844s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-701250 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.38s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-513333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-513333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.419131346s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)

TestStartStop/group/no-preload/serial/Stop (88.44s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-701250 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-701250 --alsologtostderr -v=3: (1m28.442107739s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (88.44s)

TestStartStop/group/newest-cni/serial/Stop (7.44s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-513333 --alsologtostderr -v=3
E1018 09:43:25.950196    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-513333 --alsologtostderr -v=3: (7.438805706s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.44s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-513333 -n newest-cni-513333
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-513333 -n newest-cni-513333: exit status 7 (76.176923ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-513333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (44.98s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-513333 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1018 09:43:42.875803    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/functional-679071/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:13.666451    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/addons-493204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-513333 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (44.603758053s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-513333 -n newest-cni-513333
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (44.98s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-081586 "pgrep -a kubelet"
I1018 09:44:14.157187    9956 config.go:182] Loaded profile config "auto-081586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (10.32s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-081586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ndksf" [2506317b-9b41-4cd5-ba04-7bc20c267e02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ndksf" [2506317b-9b41-4cd5-ba04-7bc20c267e02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004714076s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.32s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-513333 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (3.07s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-513333 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-513333 -n newest-cni-513333
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-513333 -n newest-cni-513333: exit status 2 (325.34793ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-513333 -n newest-cni-513333
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-513333 -n newest-cni-513333: exit status 2 (304.854692ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-513333 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-513333 -n newest-cni-513333
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-513333 -n newest-cni-513333
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.07s)

TestNetworkPlugins/group/custom-flannel/Start (74.6s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-081586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-081586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m14.602065768s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.60s)

TestNetworkPlugins/group/auto/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-081586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-081586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-081586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-bp9fs" [b172e041-22b1-4a7d-aa01-2ecdacd9554a] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-bp9fs" [b172e041-22b1-4a7d-aa01-2ecdacd9554a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005090604s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-081586 "pgrep -a kubelet"
I1018 09:44:35.291546    9956 config.go:182] Loaded profile config "calico-081586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

TestNetworkPlugins/group/calico/NetCatPod (11.27s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-081586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rxj5b" [9e0da482-25c8-4891-85dc-595971b18ebc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rxj5b" [9e0da482-25c8-4891-85dc-595971b18ebc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005324666s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.27s)

TestNetworkPlugins/group/kindnet/Start (64.97s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-081586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:44:41.010859    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:41.018087    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:41.029725    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:41.051183    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:41.092682    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:41.174277    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:41.336297    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:41.658453    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:42.299804    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:43.582210    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:46.143617    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-081586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m4.966933882s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.97s)

TestNetworkPlugins/group/calico/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-081586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-081586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-081586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.63s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-701250 -n no-preload-701250
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-701250 -n no-preload-701250: exit status 7 (96.039603ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-701250 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.63s)

TestStartStop/group/no-preload/serial/SecondStart (74.67s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-701250 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1018 09:44:51.265727    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:58.932572    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:58.939108    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:58.950599    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:58.972043    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:59.013512    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:59.095705    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:59.256998    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:44:59.578568    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:45:00.221100    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:45:01.502426    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:45:01.508013    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-701250 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m14.364678635s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-701250 -n no-preload-701250
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (74.67s)

TestNetworkPlugins/group/flannel/Start (93.39s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-081586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:45:09.185613    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:45:19.428006    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:45:21.989831    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-081586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m33.389857963s)
--- PASS: TestNetworkPlugins/group/flannel/Start (93.39s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-081586 "pgrep -a kubelet"
I1018 09:45:34.197503    9956 config.go:182] Loaded profile config "custom-flannel-081586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-081586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-56fb7" [7dbd4932-4588-42b7-9e09-7e1cbf6beab8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-56fb7" [7dbd4932-4588-42b7-9e09-7e1cbf6beab8] Running
E1018 09:45:39.910188    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/old-k8s-version-874951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005316674s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-081586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-081586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-081586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-dp6nd" [58e80668-617a-41c4-93a7-223c4b027fd0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006517986s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-081586 "pgrep -a kubelet"
I1018 09:45:51.966330    9956 config.go:182] Loaded profile config "kindnet-081586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-081586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p642w" [46cb19c5-3f35-4f37-b5c7-e221283537cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p642w" [46cb19c5-3f35-4f37-b5c7-e221283537cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.00519682s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)

TestNetworkPlugins/group/enable-default-cni/Start (81.08s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-081586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-081586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m21.079253524s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.08s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-081586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-081586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-081586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wmcsb" [980938f0-f7ac-4be9-99e0-bb61a58a6599] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wmcsb" [980938f0-f7ac-4be9-99e0-bb61a58a6599] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.006626887s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wmcsb" [980938f0-f7ac-4be9-99e0-bb61a58a6599] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003664714s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-701250 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

TestNetworkPlugins/group/bridge/Start (87.33s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-081586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-081586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m27.330654886s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.33s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-701250 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.29s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-701250 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-701250 -n no-preload-701250
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-701250 -n no-preload-701250: exit status 2 (288.039953ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-701250 -n no-preload-701250
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-701250 -n no-preload-701250: exit status 2 (281.238888ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-701250 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-701250 -n no-preload-701250
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-701250 -n no-preload-701250
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.29s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-4r64w" [7a1ec968-140e-4507-8c8a-c04d67283525] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004096177s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-081586 "pgrep -a kubelet"
I1018 09:46:43.844499    9956 config.go:182] Loaded profile config "flannel-081586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (11.47s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-081586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nmrvd" [88aeb479-4759-4440-836e-c4219bd3cc34] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nmrvd" [88aeb479-4759-4440-836e-c4219bd3cc34] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.211808447s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.47s)

TestNetworkPlugins/group/flannel/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-081586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-081586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-081586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-081586 "pgrep -a kubelet"
I1018 09:47:24.620865    9956 config.go:182] Loaded profile config "enable-default-cni-081586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-081586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g66zw" [a5e1fb8e-d38f-4786-83ca-e4f3c84aec28] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 09:47:24.874027    9956 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/default-k8s-diff-port-263234/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-g66zw" [a5e1fb8e-d38f-4786-83ca-e4f3c84aec28] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004726892s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-081586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-081586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-081586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-081586 "pgrep -a kubelet"
I1018 09:47:49.164581    9956 config.go:182] Loaded profile config "bridge-081586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-081586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fpsqj" [f665d30f-3092-4e4b-ab76-9426476bbe81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fpsqj" [f665d30f-3092-4e4b-ab76-9426476bbe81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003528843s

--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-081586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-081586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-081586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.23s)

                                                
                                    

Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.35
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
135 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
261 TestStartStop/group/disable-driver-mounts 0.18
267 TestNetworkPlugins/group/kubenet 5.23
276 TestNetworkPlugins/group/cilium 4.19
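
All of the skips listed above come from guards at the top of each test (driver, container runtime, or platform checks), so they are expected on this kvm2/crio job. When reproducing a single result from this report, it is usually easier to target one test by name; a hedged sketch, assuming a minikube checkout with the integration tests under test/integration and a built out/minikube-linux-amd64 binary (exact harness flags vary and are not shown here):

    # Run only the bridge CNI group by name from the repository root (path is an assumption)
    go test ./test/integration -run "TestNetworkPlugins/group/bridge" -timeout 60m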
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.35s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-493204 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.35s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-622316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-622316
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-081586 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-081586

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-081586

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-081586

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-081586

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-081586

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-081586

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-081586

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-081586

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-081586

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-081586

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-081586

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-081586" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-081586" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:34:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.145:8443
  name: NoKubernetes-914044
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:35:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.121:8443
  name: kubernetes-upgrade-178467
contexts:
- context:
    cluster: NoKubernetes-914044
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:34:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-914044
  name: NoKubernetes-914044
- context:
    cluster: kubernetes-upgrade-178467
    user: kubernetes-upgrade-178467
  name: kubernetes-upgrade-178467
current-context: kubernetes-upgrade-178467
kind: Config
users:
- name: NoKubernetes-914044
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/NoKubernetes-914044/client.crt
    client-key: /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/NoKubernetes-914044/client.key
- name: kubernetes-upgrade-178467
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/kubernetes-upgrade-178467/client.crt
    client-key: /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/kubernetes-upgrade-178467/client.key
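
The kubeconfig dump above explains why every kubectl call in this debugLogs block fails: only the NoKubernetes-914044 and kubernetes-upgrade-178467 contexts exist, and the kubenet-081586 profile was never started because the test is skipped on crio. A quick way to confirm which contexts a kubeconfig actually holds:

    # List the context names kubectl can see (no kubenet-081586 entry is expected here)
    kubectl config get-contexts -o name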

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-081586

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081586"

                                                
                                                
----------------------- debugLogs end: kubenet-081586 [took: 5.044543686s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-081586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-081586
--- SKIP: TestNetworkPlugins/group/kubenet (5.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-081586 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-081586" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-081586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-081586" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-6063/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:35:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.121:8443
  name: kubernetes-upgrade-178467
contexts:
- context:
    cluster: kubernetes-upgrade-178467
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:35:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-178467
  name: kubernetes-upgrade-178467
current-context: kubernetes-upgrade-178467
kind: Config
users:
- name: kubernetes-upgrade-178467
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/kubernetes-upgrade-178467/client.crt
    client-key: /home/jenkins/minikube-integration/21767-6063/.minikube/profiles/kubernetes-upgrade-178467/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-081586

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-081586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081586"

                                                
                                                
----------------------- debugLogs end: cilium-081586 [took: 4.024014562s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-081586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-081586
--- SKIP: TestNetworkPlugins/group/cilium (4.19s)

                                                
                                    