Test Report: KVM_Linux_crio 21753

                    
37d7943b58d61ad05591f3a5d0091cda14132c69:2025-10-17:41947

Failed tests (7/270)

Order  Failed test  Duration (s)
37 TestAddons/parallel/Ingress 160.7
70 TestFunctional/serial/SoftStart 1234.9
72 TestFunctional/serial/KubectlGetPods 394.04
82 TestFunctional/serial/MinikubeKubectlCmd 393.4
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 396.61
84 TestFunctional/parallel 0
175 TestPreload 129.11
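To reproduce one of these locally, the failing test can be re-run by name. A minimal sketch, assuming a minikube source checkout and the documented make integration entry point (the exact harness flags this CI job passes are not shown in the report, so treat the start args below as placeholders):

  # hypothetical local re-run of a single failed integration test by name
  env TEST_ARGS="-minikube-start-args=--driver=kvm2 -test.run TestAddons/parallel/Ingress" make integration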
TestAddons/parallel/Ingress (160.7s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-768633 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-768633 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-768633 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [bb0421a5-e7d4-4c0e-905f-a1e7cda960ac] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [bb0421a5-e7d4-4c0e-905f-a1e7cda960ac] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.007535534s
I1017 18:59:23.789071   79439 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-768633 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.625216609s)

** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-768633 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.150
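Note: the inner failure above is curl exit code 28 ("operation timed out") surfaced through ssh, i.e. the ingress controller never answered within the curl window. A manual re-check of the same probe, assuming the addons-768633 profile is still running (the controller deployment name below is assumed from the standard ingress addon):

  # hypothetical manual re-run of the failing probe, verbose, with an explicit timeout
  out/minikube-linux-amd64 -p addons-768633 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # if it still times out, inspect the controller
  kubectl --context addons-768633 -n ingress-nginx get pods,svc
  kubectl --context addons-768633 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50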
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-768633 -n addons-768633
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-768633 logs -n 25: (1.514896455s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-361182                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-361182 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ start   │ --download-only -p binary-mirror-299360 --alsologtostderr --binary-mirror http://127.0.0.1:41563 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-299360 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ delete  │ -p binary-mirror-299360                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-299360 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ addons  │ disable dashboard -p addons-768633                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ addons  │ enable dashboard -p addons-768633                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ start   │ -p addons-768633 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:58 UTC │
	│ addons  │ addons-768633 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:58 UTC │ 17 Oct 25 18:58 UTC │
	│ addons  │ addons-768633 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:58 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ enable headlamp -p addons-768633 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-768633 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-768633 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-768633 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-768633                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-768633 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-768633 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 18:59 UTC │
	│ ip      │ addons-768633 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-768633 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-768633 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 18:59 UTC │
	│ ssh     │ addons-768633 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ ssh     │ addons-768633 ssh cat /opt/local-path-provisioner/pvc-ecc24882-93ae-4db8-b0d0-e3db34be0b9b_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-768633 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-768633 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │ 17 Oct 25 19:00 UTC │
	│ addons  │ addons-768633 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │ 17 Oct 25 19:00 UTC │
	│ addons  │ addons-768633 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │ 17 Oct 25 19:00 UTC │
	│ ip      │ addons-768633 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-768633        │ jenkins │ v1.37.0 │ 17 Oct 25 19:01 UTC │ 17 Oct 25 19:01 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
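	For reference, this command history comes from minikube's audit log; on a live host the same table can be printed directly (a sketch, assuming the --audit flag behaves as in current minikube releases):

	  # print the audit table for a profile
	  out/minikube-linux-amd64 logs --audit -p addons-768633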
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 18:56:27
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 18:56:27.287841   80062 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:56:27.288138   80062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:27.288149   80062 out.go:374] Setting ErrFile to fd 2...
	I1017 18:56:27.288156   80062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:27.288398   80062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-75534/.minikube/bin
	I1017 18:56:27.289066   80062 out.go:368] Setting JSON to false
	I1017 18:56:27.289913   80062 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5938,"bootTime":1760721449,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 18:56:27.290011   80062 start.go:141] virtualization: kvm guest
	I1017 18:56:27.292012   80062 out.go:179] * [addons-768633] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 18:56:27.293313   80062 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 18:56:27.293307   80062 notify.go:220] Checking for updates...
	I1017 18:56:27.294635   80062 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 18:56:27.296114   80062 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig
	I1017 18:56:27.297405   80062 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube
	I1017 18:56:27.298837   80062 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 18:56:27.300305   80062 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 18:56:27.301660   80062 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 18:56:27.334136   80062 out.go:179] * Using the kvm2 driver based on user configuration
	I1017 18:56:27.335535   80062 start.go:305] selected driver: kvm2
	I1017 18:56:27.335573   80062 start.go:925] validating driver "kvm2" against <nil>
	I1017 18:56:27.335587   80062 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 18:56:27.336257   80062 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 18:56:27.336336   80062 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 18:56:27.351734   80062 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 18:56:27.351770   80062 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 18:56:27.366363   80062 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 18:56:27.366419   80062 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 18:56:27.366749   80062 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 18:56:27.366783   80062 cni.go:84] Creating CNI manager for ""
	I1017 18:56:27.366841   80062 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 18:56:27.366852   80062 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1017 18:56:27.366917   80062 start.go:349] cluster config:
	{Name:addons-768633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-768633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 18:56:27.367071   80062 iso.go:125] acquiring lock: {Name:mk89d24a0bd9a0a8cf0564a4affa55e11eaff101 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 18:56:27.369008   80062 out.go:179] * Starting "addons-768633" primary control-plane node in "addons-768633" cluster
	I1017 18:56:27.370396   80062 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:56:27.370444   80062 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 18:56:27.370453   80062 cache.go:58] Caching tarball of preloaded images
	I1017 18:56:27.370580   80062 preload.go:233] Found /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 18:56:27.370596   80062 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 18:56:27.370912   80062 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/config.json ...
	I1017 18:56:27.370936   80062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/config.json: {Name:mka5034677432ea484005d6c68e91fae5a58af09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:27.371116   80062 start.go:360] acquireMachinesLock for addons-768633: {Name:mke0c3abe726945d0c60793aa0bf26eb33df7fed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1017 18:56:27.371164   80062 start.go:364] duration metric: took 33.015µs to acquireMachinesLock for "addons-768633"
	I1017 18:56:27.371183   80062 start.go:93] Provisioning new machine with config: &{Name:addons-768633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-768633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 18:56:27.371247   80062 start.go:125] createHost starting for "" (driver="kvm2")
	I1017 18:56:27.372782   80062 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1017 18:56:27.372919   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:56:27.372964   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:56:27.386880   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
	I1017 18:56:27.387433   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:56:27.388011   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:56:27.388045   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:56:27.388502   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:56:27.388749   80062 main.go:141] libmachine: (addons-768633) Calling .GetMachineName
	I1017 18:56:27.388960   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:56:27.389172   80062 start.go:159] libmachine.API.Create for "addons-768633" (driver="kvm2")
	I1017 18:56:27.389209   80062 client.go:168] LocalClient.Create starting
	I1017 18:56:27.389262   80062 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem
	I1017 18:56:27.508847   80062 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem
	I1017 18:56:27.896241   80062 main.go:141] libmachine: Running pre-create checks...
	I1017 18:56:27.896273   80062 main.go:141] libmachine: (addons-768633) Calling .PreCreateCheck
	I1017 18:56:27.896848   80062 main.go:141] libmachine: (addons-768633) Calling .GetConfigRaw
	I1017 18:56:27.897403   80062 main.go:141] libmachine: Creating machine...
	I1017 18:56:27.897421   80062 main.go:141] libmachine: (addons-768633) Calling .Create
	I1017 18:56:27.897635   80062 main.go:141] libmachine: (addons-768633) creating domain...
	I1017 18:56:27.897653   80062 main.go:141] libmachine: (addons-768633) creating network...
	I1017 18:56:27.899037   80062 main.go:141] libmachine: (addons-768633) DBG | found existing default network
	I1017 18:56:27.899277   80062 main.go:141] libmachine: (addons-768633) DBG | <network>
	I1017 18:56:27.899313   80062 main.go:141] libmachine: (addons-768633) DBG |   <name>default</name>
	I1017 18:56:27.899326   80062 main.go:141] libmachine: (addons-768633) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1017 18:56:27.899340   80062 main.go:141] libmachine: (addons-768633) DBG |   <forward mode='nat'>
	I1017 18:56:27.899350   80062 main.go:141] libmachine: (addons-768633) DBG |     <nat>
	I1017 18:56:27.899363   80062 main.go:141] libmachine: (addons-768633) DBG |       <port start='1024' end='65535'/>
	I1017 18:56:27.899373   80062 main.go:141] libmachine: (addons-768633) DBG |     </nat>
	I1017 18:56:27.899384   80062 main.go:141] libmachine: (addons-768633) DBG |   </forward>
	I1017 18:56:27.899396   80062 main.go:141] libmachine: (addons-768633) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1017 18:56:27.899406   80062 main.go:141] libmachine: (addons-768633) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1017 18:56:27.899418   80062 main.go:141] libmachine: (addons-768633) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1017 18:56:27.899431   80062 main.go:141] libmachine: (addons-768633) DBG |     <dhcp>
	I1017 18:56:27.899473   80062 main.go:141] libmachine: (addons-768633) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1017 18:56:27.899511   80062 main.go:141] libmachine: (addons-768633) DBG |     </dhcp>
	I1017 18:56:27.899524   80062 main.go:141] libmachine: (addons-768633) DBG |   </ip>
	I1017 18:56:27.899534   80062 main.go:141] libmachine: (addons-768633) DBG | </network>
	I1017 18:56:27.899545   80062 main.go:141] libmachine: (addons-768633) DBG | 
	I1017 18:56:27.900093   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:27.899844   80090 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123550}
	I1017 18:56:27.900120   80062 main.go:141] libmachine: (addons-768633) DBG | defining private network:
	I1017 18:56:27.900128   80062 main.go:141] libmachine: (addons-768633) DBG | 
	I1017 18:56:27.900135   80062 main.go:141] libmachine: (addons-768633) DBG | <network>
	I1017 18:56:27.900144   80062 main.go:141] libmachine: (addons-768633) DBG |   <name>mk-addons-768633</name>
	I1017 18:56:27.900153   80062 main.go:141] libmachine: (addons-768633) DBG |   <dns enable='no'/>
	I1017 18:56:27.900162   80062 main.go:141] libmachine: (addons-768633) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1017 18:56:27.900167   80062 main.go:141] libmachine: (addons-768633) DBG |     <dhcp>
	I1017 18:56:27.900173   80062 main.go:141] libmachine: (addons-768633) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1017 18:56:27.900180   80062 main.go:141] libmachine: (addons-768633) DBG |     </dhcp>
	I1017 18:56:27.900185   80062 main.go:141] libmachine: (addons-768633) DBG |   </ip>
	I1017 18:56:27.900192   80062 main.go:141] libmachine: (addons-768633) DBG | </network>
	I1017 18:56:27.900197   80062 main.go:141] libmachine: (addons-768633) DBG | 
	I1017 18:56:27.906184   80062 main.go:141] libmachine: (addons-768633) DBG | creating private network mk-addons-768633 192.168.39.0/24...
	I1017 18:56:27.976753   80062 main.go:141] libmachine: (addons-768633) DBG | private network mk-addons-768633 192.168.39.0/24 created
	I1017 18:56:27.977131   80062 main.go:141] libmachine: (addons-768633) DBG | <network>
	I1017 18:56:27.977153   80062 main.go:141] libmachine: (addons-768633) DBG |   <name>mk-addons-768633</name>
	I1017 18:56:27.977164   80062 main.go:141] libmachine: (addons-768633) setting up store path in /home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633 ...
	I1017 18:56:27.977184   80062 main.go:141] libmachine: (addons-768633) building disk image from file:///home/jenkins/minikube-integration/21753-75534/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1017 18:56:27.977199   80062 main.go:141] libmachine: (addons-768633) DBG |   <uuid>a50da7f6-f453-4efa-ad22-1bea1c7bc21e</uuid>
	I1017 18:56:27.977227   80062 main.go:141] libmachine: (addons-768633) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1017 18:56:27.977240   80062 main.go:141] libmachine: (addons-768633) DBG |   <mac address='52:54:00:54:15:3c'/>
	I1017 18:56:27.977249   80062 main.go:141] libmachine: (addons-768633) DBG |   <dns enable='no'/>
	I1017 18:56:27.977258   80062 main.go:141] libmachine: (addons-768633) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1017 18:56:27.977280   80062 main.go:141] libmachine: (addons-768633) Downloading /home/jenkins/minikube-integration/21753-75534/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21753-75534/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1017 18:56:27.977384   80062 main.go:141] libmachine: (addons-768633) DBG |     <dhcp>
	I1017 18:56:27.977411   80062 main.go:141] libmachine: (addons-768633) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1017 18:56:27.977423   80062 main.go:141] libmachine: (addons-768633) DBG |     </dhcp>
	I1017 18:56:27.977431   80062 main.go:141] libmachine: (addons-768633) DBG |   </ip>
	I1017 18:56:27.977443   80062 main.go:141] libmachine: (addons-768633) DBG | </network>
	I1017 18:56:27.977474   80062 main.go:141] libmachine: (addons-768633) DBG | 
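	The driver is automating what would otherwise be a few virsh calls. As a hedged illustration only, with the <network> XML above saved to mk-addons-768633.xml (name and addressing copied from the log):

	  # hypothetical manual equivalent of the driver's private-network setup
	  virsh net-define mk-addons-768633.xml
	  virsh net-start mk-addons-768633
	  virsh net-info mk-addons-768633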
	I1017 18:56:27.977490   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:27.977123   80090 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21753-75534/.minikube
	I1017 18:56:28.230844   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:28.230708   80090 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa...
	I1017 18:56:28.639161   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:28.638974   80090 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/addons-768633.rawdisk...
	I1017 18:56:28.639196   80062 main.go:141] libmachine: (addons-768633) DBG | Writing magic tar header
	I1017 18:56:28.639228   80062 main.go:141] libmachine: (addons-768633) DBG | Writing SSH key tar header
	I1017 18:56:28.639240   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:28.639116   80090 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633 ...
	I1017 18:56:28.639255   80062 main.go:141] libmachine: (addons-768633) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633
	I1017 18:56:28.639263   80062 main.go:141] libmachine: (addons-768633) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21753-75534/.minikube/machines
	I1017 18:56:28.639271   80062 main.go:141] libmachine: (addons-768633) setting executable bit set on /home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633 (perms=drwx------)
	I1017 18:56:28.639280   80062 main.go:141] libmachine: (addons-768633) setting executable bit set on /home/jenkins/minikube-integration/21753-75534/.minikube/machines (perms=drwxr-xr-x)
	I1017 18:56:28.639287   80062 main.go:141] libmachine: (addons-768633) setting executable bit set on /home/jenkins/minikube-integration/21753-75534/.minikube (perms=drwxr-xr-x)
	I1017 18:56:28.639296   80062 main.go:141] libmachine: (addons-768633) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21753-75534/.minikube
	I1017 18:56:28.639312   80062 main.go:141] libmachine: (addons-768633) setting executable bit set on /home/jenkins/minikube-integration/21753-75534 (perms=drwxrwxr-x)
	I1017 18:56:28.639337   80062 main.go:141] libmachine: (addons-768633) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21753-75534
	I1017 18:56:28.639354   80062 main.go:141] libmachine: (addons-768633) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1017 18:56:28.639363   80062 main.go:141] libmachine: (addons-768633) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1017 18:56:28.639370   80062 main.go:141] libmachine: (addons-768633) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1017 18:56:28.639375   80062 main.go:141] libmachine: (addons-768633) defining domain...
	I1017 18:56:28.639408   80062 main.go:141] libmachine: (addons-768633) DBG | checking permissions on dir: /home/jenkins
	I1017 18:56:28.639432   80062 main.go:141] libmachine: (addons-768633) DBG | checking permissions on dir: /home
	I1017 18:56:28.639447   80062 main.go:141] libmachine: (addons-768633) DBG | skipping /home - not owner
	I1017 18:56:28.640757   80062 main.go:141] libmachine: (addons-768633) defining domain using XML: 
	I1017 18:56:28.640778   80062 main.go:141] libmachine: (addons-768633) <domain type='kvm'>
	I1017 18:56:28.640784   80062 main.go:141] libmachine: (addons-768633)   <name>addons-768633</name>
	I1017 18:56:28.640790   80062 main.go:141] libmachine: (addons-768633)   <memory unit='MiB'>4096</memory>
	I1017 18:56:28.640795   80062 main.go:141] libmachine: (addons-768633)   <vcpu>2</vcpu>
	I1017 18:56:28.640803   80062 main.go:141] libmachine: (addons-768633)   <features>
	I1017 18:56:28.640834   80062 main.go:141] libmachine: (addons-768633)     <acpi/>
	I1017 18:56:28.640852   80062 main.go:141] libmachine: (addons-768633)     <apic/>
	I1017 18:56:28.640858   80062 main.go:141] libmachine: (addons-768633)     <pae/>
	I1017 18:56:28.640868   80062 main.go:141] libmachine: (addons-768633)   </features>
	I1017 18:56:28.640874   80062 main.go:141] libmachine: (addons-768633)   <cpu mode='host-passthrough'>
	I1017 18:56:28.640883   80062 main.go:141] libmachine: (addons-768633)   </cpu>
	I1017 18:56:28.640889   80062 main.go:141] libmachine: (addons-768633)   <os>
	I1017 18:56:28.640895   80062 main.go:141] libmachine: (addons-768633)     <type>hvm</type>
	I1017 18:56:28.640900   80062 main.go:141] libmachine: (addons-768633)     <boot dev='cdrom'/>
	I1017 18:56:28.640908   80062 main.go:141] libmachine: (addons-768633)     <boot dev='hd'/>
	I1017 18:56:28.640914   80062 main.go:141] libmachine: (addons-768633)     <bootmenu enable='no'/>
	I1017 18:56:28.640918   80062 main.go:141] libmachine: (addons-768633)   </os>
	I1017 18:56:28.640923   80062 main.go:141] libmachine: (addons-768633)   <devices>
	I1017 18:56:28.640928   80062 main.go:141] libmachine: (addons-768633)     <disk type='file' device='cdrom'>
	I1017 18:56:28.640936   80062 main.go:141] libmachine: (addons-768633)       <source file='/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/boot2docker.iso'/>
	I1017 18:56:28.640943   80062 main.go:141] libmachine: (addons-768633)       <target dev='hdc' bus='scsi'/>
	I1017 18:56:28.640949   80062 main.go:141] libmachine: (addons-768633)       <readonly/>
	I1017 18:56:28.640965   80062 main.go:141] libmachine: (addons-768633)     </disk>
	I1017 18:56:28.640974   80062 main.go:141] libmachine: (addons-768633)     <disk type='file' device='disk'>
	I1017 18:56:28.640979   80062 main.go:141] libmachine: (addons-768633)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1017 18:56:28.640988   80062 main.go:141] libmachine: (addons-768633)       <source file='/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/addons-768633.rawdisk'/>
	I1017 18:56:28.640992   80062 main.go:141] libmachine: (addons-768633)       <target dev='hda' bus='virtio'/>
	I1017 18:56:28.641025   80062 main.go:141] libmachine: (addons-768633)     </disk>
	I1017 18:56:28.641049   80062 main.go:141] libmachine: (addons-768633)     <interface type='network'>
	I1017 18:56:28.641069   80062 main.go:141] libmachine: (addons-768633)       <source network='mk-addons-768633'/>
	I1017 18:56:28.641084   80062 main.go:141] libmachine: (addons-768633)       <model type='virtio'/>
	I1017 18:56:28.641096   80062 main.go:141] libmachine: (addons-768633)     </interface>
	I1017 18:56:28.641102   80062 main.go:141] libmachine: (addons-768633)     <interface type='network'>
	I1017 18:56:28.641110   80062 main.go:141] libmachine: (addons-768633)       <source network='default'/>
	I1017 18:56:28.641114   80062 main.go:141] libmachine: (addons-768633)       <model type='virtio'/>
	I1017 18:56:28.641119   80062 main.go:141] libmachine: (addons-768633)     </interface>
	I1017 18:56:28.641125   80062 main.go:141] libmachine: (addons-768633)     <serial type='pty'>
	I1017 18:56:28.641130   80062 main.go:141] libmachine: (addons-768633)       <target port='0'/>
	I1017 18:56:28.641136   80062 main.go:141] libmachine: (addons-768633)     </serial>
	I1017 18:56:28.641141   80062 main.go:141] libmachine: (addons-768633)     <console type='pty'>
	I1017 18:56:28.641145   80062 main.go:141] libmachine: (addons-768633)       <target type='serial' port='0'/>
	I1017 18:56:28.641150   80062 main.go:141] libmachine: (addons-768633)     </console>
	I1017 18:56:28.641158   80062 main.go:141] libmachine: (addons-768633)     <rng model='virtio'>
	I1017 18:56:28.641164   80062 main.go:141] libmachine: (addons-768633)       <backend model='random'>/dev/random</backend>
	I1017 18:56:28.641170   80062 main.go:141] libmachine: (addons-768633)     </rng>
	I1017 18:56:28.641175   80062 main.go:141] libmachine: (addons-768633)   </devices>
	I1017 18:56:28.641183   80062 main.go:141] libmachine: (addons-768633) </domain>
	I1017 18:56:28.641203   80062 main.go:141] libmachine: (addons-768633) 
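	Correspondingly for the guest itself, the define-and-start sequence the driver performs next maps onto plain virsh (a sketch, assuming the <domain> XML above is saved to addons-768633.xml):

	  virsh define addons-768633.xml      # register the domain shown above
	  virsh start addons-768633           # what the driver's "starting domain" step does
	  virsh dumpxml addons-768633         # inspect the XML as libvirt expanded it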
	I1017 18:56:28.646295   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:11:38:d6 in network default
	I1017 18:56:28.646885   80062 main.go:141] libmachine: (addons-768633) starting domain...
	I1017 18:56:28.646908   80062 main.go:141] libmachine: (addons-768633) ensuring networks are active...
	I1017 18:56:28.646919   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:28.647614   80062 main.go:141] libmachine: (addons-768633) Ensuring network default is active
	I1017 18:56:28.647953   80062 main.go:141] libmachine: (addons-768633) Ensuring network mk-addons-768633 is active
	I1017 18:56:28.648595   80062 main.go:141] libmachine: (addons-768633) getting domain XML...
	I1017 18:56:28.649658   80062 main.go:141] libmachine: (addons-768633) DBG | starting domain XML:
	I1017 18:56:28.649690   80062 main.go:141] libmachine: (addons-768633) DBG | <domain type='kvm'>
	I1017 18:56:28.649701   80062 main.go:141] libmachine: (addons-768633) DBG |   <name>addons-768633</name>
	I1017 18:56:28.649711   80062 main.go:141] libmachine: (addons-768633) DBG |   <uuid>a317fb0b-22bc-4b4c-914b-3b4610e8d8d3</uuid>
	I1017 18:56:28.649721   80062 main.go:141] libmachine: (addons-768633) DBG |   <memory unit='KiB'>4194304</memory>
	I1017 18:56:28.649729   80062 main.go:141] libmachine: (addons-768633) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1017 18:56:28.649751   80062 main.go:141] libmachine: (addons-768633) DBG |   <vcpu placement='static'>2</vcpu>
	I1017 18:56:28.649761   80062 main.go:141] libmachine: (addons-768633) DBG |   <os>
	I1017 18:56:28.649791   80062 main.go:141] libmachine: (addons-768633) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1017 18:56:28.649814   80062 main.go:141] libmachine: (addons-768633) DBG |     <boot dev='cdrom'/>
	I1017 18:56:28.649824   80062 main.go:141] libmachine: (addons-768633) DBG |     <boot dev='hd'/>
	I1017 18:56:28.649840   80062 main.go:141] libmachine: (addons-768633) DBG |     <bootmenu enable='no'/>
	I1017 18:56:28.649849   80062 main.go:141] libmachine: (addons-768633) DBG |   </os>
	I1017 18:56:28.649862   80062 main.go:141] libmachine: (addons-768633) DBG |   <features>
	I1017 18:56:28.649871   80062 main.go:141] libmachine: (addons-768633) DBG |     <acpi/>
	I1017 18:56:28.649879   80062 main.go:141] libmachine: (addons-768633) DBG |     <apic/>
	I1017 18:56:28.649887   80062 main.go:141] libmachine: (addons-768633) DBG |     <pae/>
	I1017 18:56:28.649893   80062 main.go:141] libmachine: (addons-768633) DBG |   </features>
	I1017 18:56:28.649911   80062 main.go:141] libmachine: (addons-768633) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1017 18:56:28.649919   80062 main.go:141] libmachine: (addons-768633) DBG |   <clock offset='utc'/>
	I1017 18:56:28.649934   80062 main.go:141] libmachine: (addons-768633) DBG |   <on_poweroff>destroy</on_poweroff>
	I1017 18:56:28.649952   80062 main.go:141] libmachine: (addons-768633) DBG |   <on_reboot>restart</on_reboot>
	I1017 18:56:28.649965   80062 main.go:141] libmachine: (addons-768633) DBG |   <on_crash>destroy</on_crash>
	I1017 18:56:28.649974   80062 main.go:141] libmachine: (addons-768633) DBG |   <devices>
	I1017 18:56:28.649981   80062 main.go:141] libmachine: (addons-768633) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1017 18:56:28.649988   80062 main.go:141] libmachine: (addons-768633) DBG |     <disk type='file' device='cdrom'>
	I1017 18:56:28.649993   80062 main.go:141] libmachine: (addons-768633) DBG |       <driver name='qemu' type='raw'/>
	I1017 18:56:28.650007   80062 main.go:141] libmachine: (addons-768633) DBG |       <source file='/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/boot2docker.iso'/>
	I1017 18:56:28.650031   80062 main.go:141] libmachine: (addons-768633) DBG |       <target dev='hdc' bus='scsi'/>
	I1017 18:56:28.650048   80062 main.go:141] libmachine: (addons-768633) DBG |       <readonly/>
	I1017 18:56:28.650064   80062 main.go:141] libmachine: (addons-768633) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1017 18:56:28.650085   80062 main.go:141] libmachine: (addons-768633) DBG |     </disk>
	I1017 18:56:28.650094   80062 main.go:141] libmachine: (addons-768633) DBG |     <disk type='file' device='disk'>
	I1017 18:56:28.650099   80062 main.go:141] libmachine: (addons-768633) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1017 18:56:28.650109   80062 main.go:141] libmachine: (addons-768633) DBG |       <source file='/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/addons-768633.rawdisk'/>
	I1017 18:56:28.650116   80062 main.go:141] libmachine: (addons-768633) DBG |       <target dev='hda' bus='virtio'/>
	I1017 18:56:28.650122   80062 main.go:141] libmachine: (addons-768633) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1017 18:56:28.650128   80062 main.go:141] libmachine: (addons-768633) DBG |     </disk>
	I1017 18:56:28.650135   80062 main.go:141] libmachine: (addons-768633) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1017 18:56:28.650143   80062 main.go:141] libmachine: (addons-768633) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1017 18:56:28.650149   80062 main.go:141] libmachine: (addons-768633) DBG |     </controller>
	I1017 18:56:28.650157   80062 main.go:141] libmachine: (addons-768633) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1017 18:56:28.650165   80062 main.go:141] libmachine: (addons-768633) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1017 18:56:28.650174   80062 main.go:141] libmachine: (addons-768633) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1017 18:56:28.650182   80062 main.go:141] libmachine: (addons-768633) DBG |     </controller>
	I1017 18:56:28.650193   80062 main.go:141] libmachine: (addons-768633) DBG |     <interface type='network'>
	I1017 18:56:28.650223   80062 main.go:141] libmachine: (addons-768633) DBG |       <mac address='52:54:00:2d:10:49'/>
	I1017 18:56:28.650232   80062 main.go:141] libmachine: (addons-768633) DBG |       <source network='mk-addons-768633'/>
	I1017 18:56:28.650241   80062 main.go:141] libmachine: (addons-768633) DBG |       <model type='virtio'/>
	I1017 18:56:28.650254   80062 main.go:141] libmachine: (addons-768633) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1017 18:56:28.650275   80062 main.go:141] libmachine: (addons-768633) DBG |     </interface>
	I1017 18:56:28.650303   80062 main.go:141] libmachine: (addons-768633) DBG |     <interface type='network'>
	I1017 18:56:28.650313   80062 main.go:141] libmachine: (addons-768633) DBG |       <mac address='52:54:00:11:38:d6'/>
	I1017 18:56:28.650317   80062 main.go:141] libmachine: (addons-768633) DBG |       <source network='default'/>
	I1017 18:56:28.650323   80062 main.go:141] libmachine: (addons-768633) DBG |       <model type='virtio'/>
	I1017 18:56:28.650335   80062 main.go:141] libmachine: (addons-768633) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1017 18:56:28.650342   80062 main.go:141] libmachine: (addons-768633) DBG |     </interface>
	I1017 18:56:28.650346   80062 main.go:141] libmachine: (addons-768633) DBG |     <serial type='pty'>
	I1017 18:56:28.650355   80062 main.go:141] libmachine: (addons-768633) DBG |       <target type='isa-serial' port='0'>
	I1017 18:56:28.650369   80062 main.go:141] libmachine: (addons-768633) DBG |         <model name='isa-serial'/>
	I1017 18:56:28.650377   80062 main.go:141] libmachine: (addons-768633) DBG |       </target>
	I1017 18:56:28.650381   80062 main.go:141] libmachine: (addons-768633) DBG |     </serial>
	I1017 18:56:28.650388   80062 main.go:141] libmachine: (addons-768633) DBG |     <console type='pty'>
	I1017 18:56:28.650393   80062 main.go:141] libmachine: (addons-768633) DBG |       <target type='serial' port='0'/>
	I1017 18:56:28.650402   80062 main.go:141] libmachine: (addons-768633) DBG |     </console>
	I1017 18:56:28.650406   80062 main.go:141] libmachine: (addons-768633) DBG |     <input type='mouse' bus='ps2'/>
	I1017 18:56:28.650413   80062 main.go:141] libmachine: (addons-768633) DBG |     <input type='keyboard' bus='ps2'/>
	I1017 18:56:28.650420   80062 main.go:141] libmachine: (addons-768633) DBG |     <audio id='1' type='none'/>
	I1017 18:56:28.650432   80062 main.go:141] libmachine: (addons-768633) DBG |     <memballoon model='virtio'>
	I1017 18:56:28.650447   80062 main.go:141] libmachine: (addons-768633) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1017 18:56:28.650458   80062 main.go:141] libmachine: (addons-768633) DBG |     </memballoon>
	I1017 18:56:28.650466   80062 main.go:141] libmachine: (addons-768633) DBG |     <rng model='virtio'>
	I1017 18:56:28.650472   80062 main.go:141] libmachine: (addons-768633) DBG |       <backend model='random'>/dev/random</backend>
	I1017 18:56:28.650480   80062 main.go:141] libmachine: (addons-768633) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1017 18:56:28.650485   80062 main.go:141] libmachine: (addons-768633) DBG |     </rng>
	I1017 18:56:28.650489   80062 main.go:141] libmachine: (addons-768633) DBG |   </devices>
	I1017 18:56:28.650494   80062 main.go:141] libmachine: (addons-768633) DBG | </domain>
	I1017 18:56:28.650501   80062 main.go:141] libmachine: (addons-768633) DBG | 
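The XML above is the tail of the libvirt domain definition the kvm2 driver submits before starting the VM. The two <interface> elements matter for the next phase: the driver only knows the MAC addresses it assigned (52:54:00:2d:10:49 on the private mk-addons-768633 network) and must poll libvirt until DHCP hands that MAC an address. A minimal Go sketch of pulling those MAC/network pairs out of a domain definition with encoding/xml (illustrative only, not the driver's actual code):

package main

import (
	"encoding/xml"
	"fmt"
)

// domain maps only the fields we care about; the real definition has many more.
type domain struct {
	Name       string `xml:"name"`
	Interfaces []struct {
		MAC struct {
			Address string `xml:"address,attr"`
		} `xml:"mac"`
		Source struct {
			Network string `xml:"network,attr"`
		} `xml:"source"`
	} `xml:"devices>interface"`
}

func main() {
	raw := `<domain type='kvm'>
  <name>addons-768633</name>
  <devices>
    <interface type='network'>
      <mac address='52:54:00:2d:10:49'/>
      <source network='mk-addons-768633'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:11:38:d6'/>
      <source network='default'/>
    </interface>
  </devices>
</domain>`

	var d domain
	if err := xml.Unmarshal([]byte(raw), &d); err != nil {
		panic(err)
	}
	for _, i := range d.Interfaces {
		fmt.Printf("%s: %s on network %s\n", d.Name, i.MAC.Address, i.Source.Network)
	}
}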
	I1017 18:56:29.919603   80062 main.go:141] libmachine: (addons-768633) waiting for domain to start...
	I1017 18:56:29.920939   80062 main.go:141] libmachine: (addons-768633) domain is now running
	I1017 18:56:29.920963   80062 main.go:141] libmachine: (addons-768633) waiting for IP...
	I1017 18:56:29.921748   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:29.922491   80062 main.go:141] libmachine: (addons-768633) DBG | no network interface addresses found for domain addons-768633 (source=lease)
	I1017 18:56:29.922515   80062 main.go:141] libmachine: (addons-768633) DBG | trying to list again with source=arp
	I1017 18:56:29.922839   80062 main.go:141] libmachine: (addons-768633) DBG | unable to find current IP address of domain addons-768633 in network mk-addons-768633 (interfaces detected: [])
	I1017 18:56:29.922891   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:29.922841   80090 retry.go:31] will retry after 204.780448ms: waiting for domain to come up
	I1017 18:56:30.129508   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:30.130099   80062 main.go:141] libmachine: (addons-768633) DBG | no network interface addresses found for domain addons-768633 (source=lease)
	I1017 18:56:30.130127   80062 main.go:141] libmachine: (addons-768633) DBG | trying to list again with source=arp
	I1017 18:56:30.130428   80062 main.go:141] libmachine: (addons-768633) DBG | unable to find current IP address of domain addons-768633 in network mk-addons-768633 (interfaces detected: [])
	I1017 18:56:30.130454   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:30.130403   80090 retry.go:31] will retry after 272.209213ms: waiting for domain to come up
	I1017 18:56:30.404058   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:30.404840   80062 main.go:141] libmachine: (addons-768633) DBG | no network interface addresses found for domain addons-768633 (source=lease)
	I1017 18:56:30.404871   80062 main.go:141] libmachine: (addons-768633) DBG | trying to list again with source=arp
	I1017 18:56:30.405331   80062 main.go:141] libmachine: (addons-768633) DBG | unable to find current IP address of domain addons-768633 in network mk-addons-768633 (interfaces detected: [])
	I1017 18:56:30.405399   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:30.405323   80090 retry.go:31] will retry after 381.510398ms: waiting for domain to come up
	I1017 18:56:30.789193   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:30.789822   80062 main.go:141] libmachine: (addons-768633) DBG | no network interface addresses found for domain addons-768633 (source=lease)
	I1017 18:56:30.789845   80062 main.go:141] libmachine: (addons-768633) DBG | trying to list again with source=arp
	I1017 18:56:30.790126   80062 main.go:141] libmachine: (addons-768633) DBG | unable to find current IP address of domain addons-768633 in network mk-addons-768633 (interfaces detected: [])
	I1017 18:56:30.790159   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:30.790105   80090 retry.go:31] will retry after 376.694825ms: waiting for domain to come up
	I1017 18:56:31.168774   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:31.169378   80062 main.go:141] libmachine: (addons-768633) DBG | no network interface addresses found for domain addons-768633 (source=lease)
	I1017 18:56:31.169401   80062 main.go:141] libmachine: (addons-768633) DBG | trying to list again with source=arp
	I1017 18:56:31.169682   80062 main.go:141] libmachine: (addons-768633) DBG | unable to find current IP address of domain addons-768633 in network mk-addons-768633 (interfaces detected: [])
	I1017 18:56:31.169738   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:31.169676   80090 retry.go:31] will retry after 563.107498ms: waiting for domain to come up
	I1017 18:56:31.734469   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:31.735207   80062 main.go:141] libmachine: (addons-768633) DBG | no network interface addresses found for domain addons-768633 (source=lease)
	I1017 18:56:31.735236   80062 main.go:141] libmachine: (addons-768633) DBG | trying to list again with source=arp
	I1017 18:56:31.735532   80062 main.go:141] libmachine: (addons-768633) DBG | unable to find current IP address of domain addons-768633 in network mk-addons-768633 (interfaces detected: [])
	I1017 18:56:31.735601   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:31.735523   80090 retry.go:31] will retry after 573.11522ms: waiting for domain to come up
	I1017 18:56:32.310615   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:32.311295   80062 main.go:141] libmachine: (addons-768633) DBG | no network interface addresses found for domain addons-768633 (source=lease)
	I1017 18:56:32.311321   80062 main.go:141] libmachine: (addons-768633) DBG | trying to list again with source=arp
	I1017 18:56:32.311591   80062 main.go:141] libmachine: (addons-768633) DBG | unable to find current IP address of domain addons-768633 in network mk-addons-768633 (interfaces detected: [])
	I1017 18:56:32.311665   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:32.311591   80090 retry.go:31] will retry after 779.563317ms: waiting for domain to come up
	I1017 18:56:33.093475   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:33.094290   80062 main.go:141] libmachine: (addons-768633) DBG | no network interface addresses found for domain addons-768633 (source=lease)
	I1017 18:56:33.094314   80062 main.go:141] libmachine: (addons-768633) DBG | trying to list again with source=arp
	I1017 18:56:33.094662   80062 main.go:141] libmachine: (addons-768633) DBG | unable to find current IP address of domain addons-768633 in network mk-addons-768633 (interfaces detected: [])
	I1017 18:56:33.094706   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:33.094644   80090 retry.go:31] will retry after 1.078990055s: waiting for domain to come up
	I1017 18:56:34.175194   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:34.175935   80062 main.go:141] libmachine: (addons-768633) DBG | no network interface addresses found for domain addons-768633 (source=lease)
	I1017 18:56:34.175959   80062 main.go:141] libmachine: (addons-768633) DBG | trying to list again with source=arp
	I1017 18:56:34.176290   80062 main.go:141] libmachine: (addons-768633) DBG | unable to find current IP address of domain addons-768633 in network mk-addons-768633 (interfaces detected: [])
	I1017 18:56:34.176315   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:34.176253   80090 retry.go:31] will retry after 1.442566193s: waiting for domain to come up
	I1017 18:56:35.621291   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:35.621996   80062 main.go:141] libmachine: (addons-768633) DBG | no network interface addresses found for domain addons-768633 (source=lease)
	I1017 18:56:35.622018   80062 main.go:141] libmachine: (addons-768633) DBG | trying to list again with source=arp
	I1017 18:56:35.622388   80062 main.go:141] libmachine: (addons-768633) DBG | unable to find current IP address of domain addons-768633 in network mk-addons-768633 (interfaces detected: [])
	I1017 18:56:35.622430   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:35.622360   80090 retry.go:31] will retry after 2.255766993s: waiting for domain to come up
	I1017 18:56:37.881028   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:37.881745   80062 main.go:141] libmachine: (addons-768633) DBG | no network interface addresses found for domain addons-768633 (source=lease)
	I1017 18:56:37.881776   80062 main.go:141] libmachine: (addons-768633) DBG | trying to list again with source=arp
	I1017 18:56:37.882078   80062 main.go:141] libmachine: (addons-768633) DBG | unable to find current IP address of domain addons-768633 in network mk-addons-768633 (interfaces detected: [])
	I1017 18:56:37.882103   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:37.882028   80090 retry.go:31] will retry after 2.010825085s: waiting for domain to come up
	I1017 18:56:39.894776   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:39.895478   80062 main.go:141] libmachine: (addons-768633) DBG | no network interface addresses found for domain addons-768633 (source=lease)
	I1017 18:56:39.895510   80062 main.go:141] libmachine: (addons-768633) DBG | trying to list again with source=arp
	I1017 18:56:39.895779   80062 main.go:141] libmachine: (addons-768633) DBG | unable to find current IP address of domain addons-768633 in network mk-addons-768633 (interfaces detected: [])
	I1017 18:56:39.895799   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:39.895751   80090 retry.go:31] will retry after 2.508823344s: waiting for domain to come up
	I1017 18:56:42.406673   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:42.407389   80062 main.go:141] libmachine: (addons-768633) DBG | no network interface addresses found for domain addons-768633 (source=lease)
	I1017 18:56:42.407418   80062 main.go:141] libmachine: (addons-768633) DBG | trying to list again with source=arp
	I1017 18:56:42.407792   80062 main.go:141] libmachine: (addons-768633) DBG | unable to find current IP address of domain addons-768633 in network mk-addons-768633 (interfaces detected: [])
	I1017 18:56:42.407815   80062 main.go:141] libmachine: (addons-768633) DBG | I1017 18:56:42.407770   80090 retry.go:31] will retry after 4.475151879s: waiting for domain to come up
	I1017 18:56:46.888588   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:46.889241   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has current primary IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:46.889262   80062 main.go:141] libmachine: (addons-768633) found domain IP: 192.168.39.150
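The thirteen "will retry after ..." lines above show retry.go polling the DHCP lease table (falling back to ARP) with growing, jittered delays, from roughly 200ms up to a few seconds, until the MAC shows up with an address. A minimal sketch of that capped exponential backoff with jitter; waitFor and its timings are stand-ins, not minikube's real lease lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check until it succeeds or the deadline passes, roughly
// doubling the sleep each round and adding jitter so retries don't align.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := check(); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		} else {
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
			time.Sleep(sleep)
			if delay *= 2; delay > 5*time.Second {
				delay = 5 * time.Second
			}
		}
	}
}

func main() {
	start := time.Now()
	err := waitFor(func() error {
		if time.Since(start) < 3*time.Second {
			return errors.New("no interface addresses found")
		}
		return nil // pretend the DHCP lease finally appeared
	}, 30*time.Second)
	fmt.Println("result:", err)
}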
	I1017 18:56:46.889316   80062 main.go:141] libmachine: (addons-768633) reserving static IP address...
	I1017 18:56:46.889768   80062 main.go:141] libmachine: (addons-768633) DBG | unable to find host DHCP lease matching {name: "addons-768633", mac: "52:54:00:2d:10:49", ip: "192.168.39.150"} in network mk-addons-768633
	I1017 18:56:47.091012   80062 main.go:141] libmachine: (addons-768633) reserved static IP address 192.168.39.150 for domain addons-768633
	I1017 18:56:47.091040   80062 main.go:141] libmachine: (addons-768633) waiting for SSH...
	I1017 18:56:47.091050   80062 main.go:141] libmachine: (addons-768633) DBG | Getting to WaitForSSH function...
	I1017 18:56:47.094292   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.094758   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:47.094785   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.094978   80062 main.go:141] libmachine: (addons-768633) DBG | Using SSH client type: external
	I1017 18:56:47.095003   80062 main.go:141] libmachine: (addons-768633) DBG | Using SSH private key: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa (-rw-------)
	I1017 18:56:47.095049   80062 main.go:141] libmachine: (addons-768633) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1017 18:56:47.095077   80062 main.go:141] libmachine: (addons-768633) DBG | About to run SSH command:
	I1017 18:56:47.095089   80062 main.go:141] libmachine: (addons-768633) DBG | exit 0
	I1017 18:56:47.230481   80062 main.go:141] libmachine: (addons-768633) DBG | SSH cmd err, output: <nil>: 
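WaitForSSH is considered done once a no-op command (exit 0) runs over SSH, which proves both that sshd is up and that the injected key is accepted; host-key checking is disabled because the VM's host key is freshly generated. A rough equivalent of that probe, with placeholder host and key path (sshAlive is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshAlive returns nil once `ssh ... exit 0` succeeds against the guest.
func sshAlive(host, keyPath string) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	// Poll until sshd answers; a real caller would also enforce a deadline.
	for {
		if err := sshAlive("192.168.39.150", "/path/to/id_rsa"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
}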
	I1017 18:56:47.230761   80062 main.go:141] libmachine: (addons-768633) domain creation complete
	I1017 18:56:47.231159   80062 main.go:141] libmachine: (addons-768633) Calling .GetConfigRaw
	I1017 18:56:47.231874   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:56:47.232070   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:56:47.232266   80062 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1017 18:56:47.232292   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:56:47.233565   80062 main.go:141] libmachine: Detecting operating system of created instance...
	I1017 18:56:47.233629   80062 main.go:141] libmachine: Waiting for SSH to be available...
	I1017 18:56:47.233685   80062 main.go:141] libmachine: Getting to WaitForSSH function...
	I1017 18:56:47.233705   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:56:47.236544   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.237067   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:47.237100   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.237317   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:56:47.237497   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:47.237645   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:47.237814   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:56:47.238035   80062 main.go:141] libmachine: Using SSH client type: native
	I1017 18:56:47.238310   80062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1017 18:56:47.238324   80062 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1017 18:56:47.356720   80062 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 18:56:47.356745   80062 main.go:141] libmachine: Detecting the provisioner...
	I1017 18:56:47.356754   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:56:47.359752   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.360174   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:47.360197   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.360416   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:56:47.360632   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:47.360779   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:47.360897   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:56:47.361054   80062 main.go:141] libmachine: Using SSH client type: native
	I1017 18:56:47.361253   80062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1017 18:56:47.361264   80062 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1017 18:56:47.477502   80062 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1017 18:56:47.477616   80062 main.go:141] libmachine: found compatible host: buildroot
	I1017 18:56:47.477628   80062 main.go:141] libmachine: Provisioning with buildroot...
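Provisioner detection is just `cat /etc/os-release` parsed into KEY=value pairs; ID=buildroot selects the buildroot provisioner. A small sketch of that parsing (parseOSRelease is illustrative), fed the exact output captured above:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release KEY=value lines into a map,
// skipping blanks and comments and stripping optional quotes.
func parseOSRelease(s string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	osr := parseOSRelease("NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\nVERSION_ID=2025.02\nPRETTY_NAME=\"Buildroot 2025.02\"\n")
	fmt.Println(osr["ID"], osr["VERSION_ID"]) // buildroot 2025.02
}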
	I1017 18:56:47.477636   80062 main.go:141] libmachine: (addons-768633) Calling .GetMachineName
	I1017 18:56:47.478132   80062 buildroot.go:166] provisioning hostname "addons-768633"
	I1017 18:56:47.478167   80062 main.go:141] libmachine: (addons-768633) Calling .GetMachineName
	I1017 18:56:47.478397   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:56:47.481403   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.481899   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:47.481937   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.482166   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:56:47.482381   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:47.482567   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:47.482705   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:56:47.482891   80062 main.go:141] libmachine: Using SSH client type: native
	I1017 18:56:47.483114   80062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1017 18:56:47.483127   80062 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-768633 && echo "addons-768633" | sudo tee /etc/hostname
	I1017 18:56:47.616674   80062 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-768633
	
	I1017 18:56:47.616703   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:56:47.619878   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.620250   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:47.620276   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.620460   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:56:47.620658   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:47.620805   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:47.620918   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:56:47.621149   80062 main.go:141] libmachine: Using SSH client type: native
	I1017 18:56:47.621361   80062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1017 18:56:47.621379   80062 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-768633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-768633/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-768633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 18:56:47.748045   80062 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 18:56:47.748074   80062 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21753-75534/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-75534/.minikube}
	I1017 18:56:47.748092   80062 buildroot.go:174] setting up certificates
	I1017 18:56:47.748115   80062 provision.go:84] configureAuth start
	I1017 18:56:47.748125   80062 main.go:141] libmachine: (addons-768633) Calling .GetMachineName
	I1017 18:56:47.748407   80062 main.go:141] libmachine: (addons-768633) Calling .GetIP
	I1017 18:56:47.751544   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.751973   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:47.751995   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.752150   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:56:47.754511   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.754988   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:47.755019   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.755152   80062 provision.go:143] copyHostCerts
	I1017 18:56:47.755240   80062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem (1082 bytes)
	I1017 18:56:47.755399   80062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem (1123 bytes)
	I1017 18:56:47.755487   80062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem (1679 bytes)
	I1017 18:56:47.755571   80062 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem org=jenkins.addons-768633 san=[127.0.0.1 192.168.39.150 addons-768633 localhost minikube]
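The server certificate is generated on the host and must carry every name and address a client might dial, hence the SAN list [127.0.0.1 192.168.39.150 addons-768633 localhost minikube]. A self-contained sketch of issuing such a cert with crypto/x509; it is self-signed here for brevity, whereas minikube signs with its CA (ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-768633"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // ~3 years
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line: every IP and hostname the cert must cover.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.150")},
		DNSNames:    []string{"addons-768633", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}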
	I1017 18:56:47.937071   80062 provision.go:177] copyRemoteCerts
	I1017 18:56:47.937162   80062 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 18:56:47.937196   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:56:47.940494   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.940890   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:47.940934   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:47.941174   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:56:47.941382   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:47.941536   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:56:47.941747   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:56:48.030909   80062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 18:56:48.061390   80062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 18:56:48.091431   80062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 18:56:48.120973   80062 provision.go:87] duration metric: took 372.839918ms to configureAuth
	I1017 18:56:48.121004   80062 buildroot.go:189] setting minikube options for container-runtime
	I1017 18:56:48.121182   80062 config.go:182] Loaded profile config "addons-768633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:56:48.121286   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:56:48.124444   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.124989   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:48.125019   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.125292   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:56:48.125498   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:48.125716   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:48.125878   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:56:48.126049   80062 main.go:141] libmachine: Using SSH client type: native
	I1017 18:56:48.126320   80062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1017 18:56:48.126351   80062 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 18:56:48.378807   80062 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 18:56:48.378841   80062 main.go:141] libmachine: Checking connection to Docker...
	I1017 18:56:48.378849   80062 main.go:141] libmachine: (addons-768633) Calling .GetURL
	I1017 18:56:48.380325   80062 main.go:141] libmachine: (addons-768633) DBG | using libvirt version 8000000
	I1017 18:56:48.383275   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.383854   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:48.383878   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.384108   80062 main.go:141] libmachine: Docker is up and running!
	I1017 18:56:48.384150   80062 main.go:141] libmachine: Reticulating splines...
	I1017 18:56:48.384164   80062 client.go:171] duration metric: took 20.994943651s to LocalClient.Create
	I1017 18:56:48.384193   80062 start.go:167] duration metric: took 20.995034072s to libmachine.API.Create "addons-768633"
	I1017 18:56:48.384220   80062 start.go:293] postStartSetup for "addons-768633" (driver="kvm2")
	I1017 18:56:48.384234   80062 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 18:56:48.384261   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:56:48.384520   80062 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 18:56:48.384547   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:56:48.387113   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.387459   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:48.387486   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.387649   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:56:48.387833   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:48.388006   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:56:48.388153   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:56:48.477765   80062 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 18:56:48.483156   80062 info.go:137] Remote host: Buildroot 2025.02
	I1017 18:56:48.483193   80062 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-75534/.minikube/addons for local assets ...
	I1017 18:56:48.483284   80062 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-75534/.minikube/files for local assets ...
	I1017 18:56:48.483322   80062 start.go:296] duration metric: took 99.093467ms for postStartSetup
	I1017 18:56:48.483372   80062 main.go:141] libmachine: (addons-768633) Calling .GetConfigRaw
	I1017 18:56:48.484159   80062 main.go:141] libmachine: (addons-768633) Calling .GetIP
	I1017 18:56:48.487347   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.487775   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:48.487805   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.488135   80062 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/config.json ...
	I1017 18:56:48.488347   80062 start.go:128] duration metric: took 21.117085156s to createHost
	I1017 18:56:48.488373   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:56:48.490730   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.491031   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:48.491059   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.491222   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:56:48.491425   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:48.491588   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:48.491709   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:56:48.491878   80062 main.go:141] libmachine: Using SSH client type: native
	I1017 18:56:48.492068   80062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1017 18:56:48.492077   80062 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1017 18:56:48.612865   80062 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760727408.574348202
	
	I1017 18:56:48.612890   80062 fix.go:216] guest clock: 1760727408.574348202
	I1017 18:56:48.612897   80062 fix.go:229] Guest: 2025-10-17 18:56:48.574348202 +0000 UTC Remote: 2025-10-17 18:56:48.48836124 +0000 UTC m=+21.238452799 (delta=85.986962ms)
	I1017 18:56:48.612957   80062 fix.go:200] guest clock delta is within tolerance: 85.986962ms
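The guest-clock check runs `date +%s.%N` inside the VM, parses the seconds.nanoseconds string, and compares it with the host's wall clock; here the 85.986962ms delta is well inside tolerance, so no clock adjustment is forced. A sketch of that comparison (parseGuestClock is illustrative) using the exact values from the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nanos := int64(0)
	if frac != "" {
		// Pad or truncate the fractional part to exactly 9 digits.
		frac = (frac + "000000000")[:9]
		if nanos, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(secs, nanos), nil
}

func main() {
	guest, err := parseGuestClock("1760727408.574348202")
	if err != nil {
		panic(err)
	}
	host := time.Date(2025, 10, 17, 18, 56, 48, 488361240, time.UTC) // host time from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < time.Second)
}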
	I1017 18:56:48.612968   80062 start.go:83] releasing machines lock for "addons-768633", held for 21.241794272s
	I1017 18:56:48.612998   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:56:48.613284   80062 main.go:141] libmachine: (addons-768633) Calling .GetIP
	I1017 18:56:48.616529   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.616877   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:48.616909   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.617079   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:56:48.617665   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:56:48.617867   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:56:48.617988   80062 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 18:56:48.618032   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:56:48.618073   80062 ssh_runner.go:195] Run: cat /version.json
	I1017 18:56:48.618102   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:56:48.620992   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.621112   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.621372   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:48.621403   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.621571   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:48.621597   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:56:48.621600   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:48.621801   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:48.621837   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:56:48.622021   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:56:48.622045   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:56:48.622180   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:56:48.622233   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:56:48.622345   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:56:48.718395   80062 ssh_runner.go:195] Run: systemctl --version
	I1017 18:56:48.747112   80062 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 18:56:48.904265   80062 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 18:56:48.911690   80062 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 18:56:48.911773   80062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 18:56:48.933399   80062 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1017 18:56:48.933428   80062 start.go:495] detecting cgroup driver to use...
	I1017 18:56:48.933505   80062 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 18:56:48.953988   80062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 18:56:48.971399   80062 docker.go:218] disabling cri-docker service (if available) ...
	I1017 18:56:48.971471   80062 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 18:56:48.989150   80062 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 18:56:49.005277   80062 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 18:56:49.147647   80062 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 18:56:49.356007   80062 docker.go:234] disabling docker service ...
	I1017 18:56:49.356072   80062 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 18:56:49.372461   80062 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 18:56:49.387180   80062 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 18:56:49.543431   80062 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 18:56:49.686042   80062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 18:56:49.703247   80062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 18:56:49.729714   80062 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 18:56:49.729788   80062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:49.746541   80062 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 18:56:49.746626   80062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:49.761105   80062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:49.777837   80062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:49.794296   80062 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 18:56:49.810682   80062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:49.826398   80062 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:56:49.851249   80062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
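These sed invocations perform idempotent key edits on /etc/crio/crio.conf.d/02-crio.conf: replace the pause_image or cgroup_manager line wherever it sits, and append settings like default_sysctls only when absent. The same replace-or-append pattern in Go, as a rough stand-in for the shell (setKey is illustrative):

package main

import (
	"fmt"
	"regexp"
)

// setKey mimics the sed edits above: rewrite any existing `key = ...` line,
// appending the setting if the key is not present at all.
func setKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %s", key, value)
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, line)
	}
	return conf + "\n" + line
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\""
	conf = setKey(conf, "pause_image", `"registry.k8s.io/pause:3.10.1"`)
	conf = setKey(conf, "cgroup_manager", `"cgroupfs"`)
	fmt.Println(conf)
}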
	I1017 18:56:49.865576   80062 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 18:56:49.877573   80062 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1017 18:56:49.877629   80062 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1017 18:56:49.902517   80062 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 18:56:49.916818   80062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 18:56:50.057295   80062 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 18:56:50.340453   80062 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 18:56:50.340585   80062 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 18:56:50.346468   80062 start.go:563] Will wait 60s for crictl version
	I1017 18:56:50.346545   80062 ssh_runner.go:195] Run: which crictl
	I1017 18:56:50.351431   80062 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1017 18:56:50.400305   80062 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1017 18:56:50.400440   80062 ssh_runner.go:195] Run: crio --version
	I1017 18:56:50.432784   80062 ssh_runner.go:195] Run: crio --version
	I1017 18:56:50.546666   80062 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1017 18:56:50.600777   80062 main.go:141] libmachine: (addons-768633) Calling .GetIP
	I1017 18:56:50.603856   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:50.604234   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:56:50.604272   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:56:50.604536   80062 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1017 18:56:50.609518   80062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
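This /etc/hosts update is idempotent: filter out any existing host.minikube.internal line, append the current mapping, and copy the result back over the original. A Go rendering of the same pattern on an in-memory hosts file (ensureHostsEntry is illustrative; the real edit happens over SSH exactly as shown above):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry drops any line ending in "\t<host>" and appends a fresh
// "ip\thost" mapping, matching the grep -v / echo pipeline in the log.
func ensureHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry; re-added below
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}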
	I1017 18:56:50.625791   80062 kubeadm.go:883] updating cluster {Name:addons-768633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-768633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1017 18:56:50.625917   80062 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:56:50.625961   80062 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 18:56:50.661962   80062 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1017 18:56:50.662074   80062 ssh_runner.go:195] Run: which lz4
	I1017 18:56:50.666646   80062 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1017 18:56:50.671765   80062 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1017 18:56:50.671800   80062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1017 18:56:52.242058   80062 crio.go:462] duration metric: took 1.575445763s to copy over tarball
	I1017 18:56:52.242151   80062 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1017 18:56:53.884610   80062 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.642422521s)
	I1017 18:56:53.884640   80062 crio.go:469] duration metric: took 1.642546317s to extract the tarball
	I1017 18:56:53.884648   80062 ssh_runner.go:146] rm: /preloaded.tar.lz4
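The preload path avoids pulling images over the network: a ~409MB lz4-compressed tarball of the container store is copied in over SSH and unpacked into /var, preserving security.capability xattrs so capability-bearing binaries keep working. A thin wrapper around the same tar invocation (extractPreload is illustrative; it assumes tar and lz4 exist on the target, as they do in the minikube ISO):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks the preloaded image tarball into /var, keeping
// the security.capability extended attributes intact.
func extractPreload(path string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", path)
	start := time.Now()
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar failed: %v: %s", err, out)
	}
	fmt.Printf("extracted %s in %v\n", path, time.Since(start))
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}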
	I1017 18:56:53.926266   80062 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 18:56:53.971795   80062 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 18:56:53.971829   80062 cache_images.go:85] Images are preloaded, skipping loading
	I1017 18:56:53.971840   80062 kubeadm.go:934] updating node { 192.168.39.150 8443 v1.34.1 crio true true} ...
	I1017 18:56:53.971987   80062 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-768633 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-768633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 18:56:53.972077   80062 ssh_runner.go:195] Run: crio config
	I1017 18:56:54.021303   80062 cni.go:84] Creating CNI manager for ""
	I1017 18:56:54.021328   80062 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 18:56:54.021350   80062 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 18:56:54.021391   80062 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-768633 NodeName:addons-768633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 18:56:54.021520   80062 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-768633"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.150"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
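The block above is the kubeadm config that minikube renders from the options struct logged at kubeadm.go:190. A minimal sketch of how such a config can be produced with Go's text/template follows; the struct fields and the tiny template are illustrative assumptions for this report, not minikube's actual template, which covers far more fields:

	// Hypothetical sketch: rendering a kubeadm InitConfiguration fragment
	// from an options struct with text/template. Field names are assumptions.
	package main

	import (
		"os"
		"text/template"
	)

	type KubeadmOptions struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
		CRISocket        string
	}

	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		opts := KubeadmOptions{
			AdvertiseAddress: "192.168.39.150",
			APIServerPort:    8443,
			NodeName:         "addons-768633",
			CRISocket:        "unix:///var/run/crio/crio.sock",
		}
		// template.Must panics on a malformed template, acceptable for a constant.
		t := template.Must(template.New("init").Parse(initTmpl))
		if err := t.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}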
	I1017 18:56:54.021609   80062 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 18:56:54.034589   80062 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 18:56:54.034676   80062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 18:56:54.046441   80062 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1017 18:56:54.068070   80062 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 18:56:54.089421   80062 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1017 18:56:54.110723   80062 ssh_runner.go:195] Run: grep 192.168.39.150	control-plane.minikube.internal$ /etc/hosts
	I1017 18:56:54.115042   80062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 18:56:54.130492   80062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 18:56:54.271126   80062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 18:56:54.293914   80062 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633 for IP: 192.168.39.150
	I1017 18:56:54.293950   80062 certs.go:195] generating shared ca certs ...
	I1017 18:56:54.293976   80062 certs.go:227] acquiring lock for ca certs: {Name:mka410ab7d3b92eaaa0d0545223807c0ba196baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:54.294182   80062 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key
	I1017 18:56:54.376968   80062 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt ...
	I1017 18:56:54.377001   80062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt: {Name:mk6859f9b6f5ee29fbe10c65aab2c3b582ef56be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:54.377236   80062 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key ...
	I1017 18:56:54.377256   80062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key: {Name:mkd4e8907bb82135103ba7c1c93ed29534582077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:54.377374   80062 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key
	I1017 18:56:54.486920   80062 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.crt ...
	I1017 18:56:54.486957   80062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.crt: {Name:mkfcb3d195f52f792328c846b7c9cf9f9aecb6d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:54.487176   80062 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key ...
	I1017 18:56:54.487194   80062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key: {Name:mk98fae49a6dbfaed694344401000003b7c70595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:54.487309   80062 certs.go:257] generating profile certs ...
	I1017 18:56:54.487393   80062 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.key
	I1017 18:56:54.487423   80062 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt with IP's: []
	I1017 18:56:55.025829   80062 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt ...
	I1017 18:56:55.025865   80062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: {Name:mke8b64233ec3260f4ecdfa7e8ef76db3c3c8df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:55.026074   80062 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.key ...
	I1017 18:56:55.026087   80062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.key: {Name:mkcaefd709fa1ae84475388b2bd3de44a37afb0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:55.026173   80062 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/apiserver.key.7e6ec752
	I1017 18:56:55.026193   80062 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/apiserver.crt.7e6ec752 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150]
	I1017 18:56:55.096196   80062 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/apiserver.crt.7e6ec752 ...
	I1017 18:56:55.096227   80062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/apiserver.crt.7e6ec752: {Name:mkcb57c85f1de8f815f92d27461aefe8ece6bba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:55.096393   80062 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/apiserver.key.7e6ec752 ...
	I1017 18:56:55.096409   80062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/apiserver.key.7e6ec752: {Name:mkffa111357ba8efeed726323e147e48904adb5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:55.096480   80062 certs.go:382] copying /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/apiserver.crt.7e6ec752 -> /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/apiserver.crt
	I1017 18:56:55.096568   80062 certs.go:386] copying /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/apiserver.key.7e6ec752 -> /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/apiserver.key
	I1017 18:56:55.096618   80062 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/proxy-client.key
	I1017 18:56:55.096633   80062 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/proxy-client.crt with IP's: []
	I1017 18:56:55.268081   80062 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/proxy-client.crt ...
	I1017 18:56:55.268126   80062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/proxy-client.crt: {Name:mk0a9104c5d491044cf52e215ed8c2e8945a919d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:55.268361   80062 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/proxy-client.key ...
	I1017 18:56:55.268383   80062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/proxy-client.key: {Name:mk614cf51ab861fd099031b83d1492d54d40117c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:55.268646   80062 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 18:56:55.268691   80062 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem (1082 bytes)
	I1017 18:56:55.268718   80062 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem (1123 bytes)
	I1017 18:56:55.268739   80062 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem (1679 bytes)
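The certs.go:241 steps above generate the shared "minikubeCA" and "proxyClientCA" authorities before the profile certs are signed. A self-contained sketch of generating a self-signed CA with Go's standard crypto/x509; parameters such as the common name, key size, and validity are assumptions for illustration, not minikube's crypto.go:

	// Minimal self-signed CA generation, analogous to the
	// "generating ... ca cert" steps above. All parameters are illustrative.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// Self-signed: the template acts as its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	}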
	I1017 18:56:55.269370   80062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 18:56:55.309284   80062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 18:56:55.342309   80062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 18:56:55.374545   80062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 18:56:55.406939   80062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1017 18:56:55.440441   80062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 18:56:55.473119   80062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 18:56:55.505610   80062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 18:56:55.537815   80062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 18:56:55.570035   80062 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 18:56:55.591634   80062 ssh_runner.go:195] Run: openssl version
	I1017 18:56:55.598843   80062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 18:56:55.613804   80062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 18:56:55.619619   80062 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 18:56:55.619684   80062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 18:56:55.627759   80062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
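The two steps above install minikubeCA.pem into the system trust store: OpenSSL looks certificates up by subject-name hash, so the file is symlinked as <hash>.0 (b5213941.0 here). A sketch that reproduces the same link by shelling out to openssl, just as the commands in the log do; it assumes an openssl binary on PATH and the path shown in the log:

	// Compute the OpenSSL subject hash of a CA cert and create the <hash>.0
	// symlink that trust-store lookups expect. Paths mirror the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		certPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// ln -fs equivalent: drop any stale link, then symlink.
		os.Remove(link)
		if err := os.Symlink(certPath, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", certPath)
	}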
	I1017 18:56:55.646120   80062 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 18:56:55.651520   80062 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 18:56:55.651593   80062 kubeadm.go:400] StartCluster: {Name:addons-768633 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-768633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 18:56:55.651700   80062 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:56:55.651773   80062 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:56:55.697122   80062 cri.go:89] found id: ""
	I1017 18:56:55.697212   80062 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 18:56:55.711075   80062 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 18:56:55.724439   80062 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 18:56:55.737662   80062 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 18:56:55.737683   80062 kubeadm.go:157] found existing configuration files:
	
	I1017 18:56:55.737740   80062 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 18:56:55.749378   80062 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 18:56:55.749457   80062 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 18:56:55.761680   80062 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 18:56:55.773325   80062 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 18:56:55.773392   80062 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 18:56:55.787083   80062 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 18:56:55.799436   80062 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 18:56:55.799504   80062 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 18:56:55.812638   80062 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 18:56:55.824242   80062 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 18:56:55.824320   80062 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
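kubeadm.go:163 applies the same rule to each of the four kubeconfig files above: if a file does not contain the expected control-plane endpoint, it is removed so that kubeadm can regenerate it. A compact sketch of that check-and-remove loop; the file list and endpoint mirror the log, while the wrapper itself is illustrative:

	// Remove kubeconfig files that don't reference the expected endpoint,
	// mirroring the grep/rm sequence in the log above.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		endpoint := []byte("https://control-plane.minikube.internal:8443")
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			// A missing file and a file without the endpoint both lead to
			// removal; rm -f semantics make removing a missing file harmless.
			if err == nil && bytes.Contains(data, endpoint) {
				continue
			}
			os.Remove(f)
			fmt.Println("removed (or absent):", f)
		}
	}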
	I1017 18:56:55.836314   80062 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1017 18:56:55.889996   80062 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 18:56:55.890057   80062 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 18:56:56.026021   80062 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 18:56:56.026157   80062 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 18:56:56.026308   80062 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 18:56:56.039863   80062 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 18:56:56.146671   80062 out.go:252]   - Generating certificates and keys ...
	I1017 18:56:56.146830   80062 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 18:56:56.146922   80062 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 18:56:56.366280   80062 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 18:56:56.661068   80062 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 18:56:56.883664   80062 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 18:56:57.084749   80062 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 18:56:57.378432   80062 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 18:56:57.378625   80062 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-768633 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I1017 18:56:57.647766   80062 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 18:56:57.648020   80062 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-768633 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I1017 18:56:57.783641   80062 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 18:56:58.136173   80062 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 18:56:58.345719   80062 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 18:56:58.345827   80062 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 18:56:58.512463   80062 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 18:56:58.612541   80062 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 18:56:58.906238   80062 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 18:56:59.155128   80062 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 18:56:59.281397   80062 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 18:56:59.282078   80062 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 18:56:59.284281   80062 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 18:56:59.287527   80062 out.go:252]   - Booting up control plane ...
	I1017 18:56:59.287660   80062 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 18:56:59.287786   80062 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 18:56:59.287908   80062 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 18:56:59.304625   80062 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 18:56:59.305439   80062 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 18:56:59.313014   80062 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 18:56:59.313957   80062 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 18:56:59.314040   80062 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 18:56:59.481453   80062 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 18:56:59.482900   80062 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 18:57:00.483536   80062 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001631261s
	I1017 18:57:00.486588   80062 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 18:57:00.486700   80062 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.150:8443/livez
	I1017 18:57:00.486850   80062 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 18:57:00.486976   80062 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 18:57:03.469491   80062 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.984660011s
	I1017 18:57:04.220345   80062 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.736425456s
	I1017 18:57:06.485576   80062 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.00280108s
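The control-plane-check phase above polls each component's health endpoint until it answers or the 4m0s budget expires. A generic sketch of such a poll loop with net/http; the URL, interval, and TLS handling are assumptions for illustration, not kubeadm's implementation:

	// Poll a healthz/livez endpoint until it returns 200 or the deadline
	// passes, in the spirit of kubeadm's control-plane-check.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthy(url string, timeout time.Duration) error {
		// Self-signed serving certs are the norm here, so the sketch skips
		// verification; kubeadm configures proper trust in reality.
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitHealthy("https://192.168.39.150:8443/livez", 4*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("kube-apiserver is healthy")
	}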
	I1017 18:57:06.498365   80062 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 18:57:06.516085   80062 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 18:57:06.528767   80062 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 18:57:06.529237   80062 kubeadm.go:318] [mark-control-plane] Marking the node addons-768633 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 18:57:06.548917   80062 kubeadm.go:318] [bootstrap-token] Using token: pb8i6h.iob1foy87rfz9rk2
	I1017 18:57:06.550407   80062 out.go:252]   - Configuring RBAC rules ...
	I1017 18:57:06.550599   80062 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 18:57:06.559270   80062 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 18:57:06.568132   80062 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 18:57:06.574169   80062 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 18:57:06.577063   80062 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 18:57:06.580513   80062 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 18:57:06.892530   80062 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 18:57:07.366975   80062 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 18:57:07.894628   80062 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 18:57:07.896342   80062 kubeadm.go:318] 
	I1017 18:57:07.896410   80062 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 18:57:07.896416   80062 kubeadm.go:318] 
	I1017 18:57:07.896485   80062 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 18:57:07.896492   80062 kubeadm.go:318] 
	I1017 18:57:07.896574   80062 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 18:57:07.896644   80062 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 18:57:07.896689   80062 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 18:57:07.896697   80062 kubeadm.go:318] 
	I1017 18:57:07.896767   80062 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 18:57:07.896778   80062 kubeadm.go:318] 
	I1017 18:57:07.896827   80062 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 18:57:07.896837   80062 kubeadm.go:318] 
	I1017 18:57:07.896890   80062 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 18:57:07.896959   80062 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 18:57:07.897017   80062 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 18:57:07.897045   80062 kubeadm.go:318] 
	I1017 18:57:07.897212   80062 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 18:57:07.897326   80062 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 18:57:07.897343   80062 kubeadm.go:318] 
	I1017 18:57:07.897438   80062 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token pb8i6h.iob1foy87rfz9rk2 \
	I1017 18:57:07.897543   80062 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3b308ccc67c6912c3da08c8fd549b129f93fcc97f263e1539a82462d36f18fd2 \
	I1017 18:57:07.897602   80062 kubeadm.go:318] 	--control-plane 
	I1017 18:57:07.897613   80062 kubeadm.go:318] 
	I1017 18:57:07.897710   80062 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 18:57:07.897720   80062 kubeadm.go:318] 
	I1017 18:57:07.897793   80062 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pb8i6h.iob1foy87rfz9rk2 \
	I1017 18:57:07.897944   80062 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3b308ccc67c6912c3da08c8fd549b129f93fcc97f263e1539a82462d36f18fd2 
	I1017 18:57:07.899472   80062 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
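The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the CA certificate's Subject Public Key Info (SPKI), which joining nodes use to pin the cluster CA. Computing it from ca.crt in Go (the path is taken from the cert transfers earlier in this log):

	// Compute kubeadm's discovery-token-ca-cert-hash: sha256 over the DER
	// encoding of the CA certificate's SubjectPublicKeyInfo.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}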
	I1017 18:57:07.899570   80062 cni.go:84] Creating CNI manager for ""
	I1017 18:57:07.899595   80062 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 18:57:07.901240   80062 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1017 18:57:07.902398   80062 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1017 18:57:07.918028   80062 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
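The 496-byte file written to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI that cni.go:146 recommends for the "kvm2" driver plus "crio" runtime. A sketch that builds a minimal bridge conflist with encoding/json; the plugin options shown are typical bridge settings and assumptions, not the literal contents of the file minikube writes:

	// Build a minimal bridge CNI conflist. Values are illustrative defaults.
	package main

	import (
		"encoding/json"
		"os"
	)

	func main() {
		conf := map[string]any{
			"cniVersion": "1.0.0",
			"name":       "bridge",
			"plugins": []map[string]any{
				{
					"type":        "bridge",
					"bridge":      "bridge",
					"isGateway":   true,
					"ipMasq":      true,
					"hairpinMode": true,
					"ipam": map[string]any{
						"type":   "host-local",
						"subnet": "10.244.0.0/16",
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		enc := json.NewEncoder(os.Stdout)
		enc.SetIndent("", "  ")
		enc.Encode(conf)
	}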
	I1017 18:57:07.941949   80062 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 18:57:07.942063   80062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:07.942090   80062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-768633 minikube.k8s.io/updated_at=2025_10_17T18_57_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=addons-768633 minikube.k8s.io/primary=true
	I1017 18:57:08.105168   80062 ops.go:34] apiserver oom_adj: -16
	I1017 18:57:08.105345   80062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:08.605463   80062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:09.105833   80062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:09.605767   80062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:10.106245   80062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:10.605946   80062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:11.106359   80062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:11.605518   80062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:12.105581   80062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:12.207775   80062 kubeadm.go:1113] duration metric: took 4.265783805s to wait for elevateKubeSystemPrivileges
	I1017 18:57:12.207846   80062 kubeadm.go:402] duration metric: took 16.556258469s to StartCluster
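The run of `kubectl get sa default` calls above is a fixed-interval retry: minikube polls every 500ms until the default service account exists, then records the 4.27s duration metric. The same pattern in plain Go, with the kubectl invocation standing in for any eventually-consistent check; the command and timings mirror the log, while the wrapper is illustrative:

	// Retry a command at a fixed interval until it succeeds or times out,
	// like the elevateKubeSystemPrivileges wait above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func retryUntil(timeout, interval time.Duration, cmd func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			err := cmd()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		start := time.Now()
		err := retryUntil(2*time.Minute, 500*time.Millisecond, func() error {
			// The default service account appears once kube-controller-manager runs.
			return exec.Command("kubectl", "get", "sa", "default").Run()
		})
		if err != nil {
			panic(err)
		}
		fmt.Printf("duration metric: took %s to wait for default SA\n", time.Since(start))
	}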
	I1017 18:57:12.207876   80062 settings.go:142] acquiring lock: {Name:mkda33fafc6cb583284a8333cb60efdc2a47f894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:12.208043   80062 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-75534/kubeconfig
	I1017 18:57:12.208606   80062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-75534/kubeconfig: {Name:mkeb0035d9ef9d3dc893fc7f4a25aa46f7d51ce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:12.208854   80062 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 18:57:12.208886   80062 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 18:57:12.208953   80062 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1017 18:57:12.209074   80062 addons.go:69] Setting yakd=true in profile "addons-768633"
	I1017 18:57:12.209097   80062 addons.go:238] Setting addon yakd=true in "addons-768633"
	I1017 18:57:12.209109   80062 addons.go:69] Setting inspektor-gadget=true in profile "addons-768633"
	I1017 18:57:12.209131   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.209127   80062 addons.go:69] Setting default-storageclass=true in profile "addons-768633"
	I1017 18:57:12.209142   80062 config.go:182] Loaded profile config "addons-768633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:57:12.209153   80062 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-768633"
	I1017 18:57:12.209158   80062 addons.go:69] Setting ingress=true in profile "addons-768633"
	I1017 18:57:12.209170   80062 addons.go:238] Setting addon ingress=true in "addons-768633"
	I1017 18:57:12.209149   80062 addons.go:69] Setting gcp-auth=true in profile "addons-768633"
	I1017 18:57:12.209210   80062 mustload.go:65] Loading cluster: addons-768633
	I1017 18:57:12.209219   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.209210   80062 addons.go:69] Setting registry-creds=true in profile "addons-768633"
	I1017 18:57:12.209236   80062 addons.go:238] Setting addon registry-creds=true in "addons-768633"
	I1017 18:57:12.209268   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.209138   80062 addons.go:238] Setting addon inspektor-gadget=true in "addons-768633"
	I1017 18:57:12.209299   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.209367   80062 config.go:182] Loaded profile config "addons-768633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:57:12.209658   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.209677   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.209694   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.209692   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.209714   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.209732   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.209776   80062 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-768633"
	I1017 18:57:12.209784   80062 addons.go:69] Setting metrics-server=true in profile "addons-768633"
	I1017 18:57:12.209788   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.209795   80062 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-768633"
	I1017 18:57:12.209798   80062 addons.go:238] Setting addon metrics-server=true in "addons-768633"
	I1017 18:57:12.209806   80062 addons.go:69] Setting cloud-spanner=true in profile "addons-768633"
	I1017 18:57:12.209816   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.209819   80062 addons.go:69] Setting registry=true in profile "addons-768633"
	I1017 18:57:12.209823   80062 addons.go:238] Setting addon cloud-spanner=true in "addons-768633"
	I1017 18:57:12.209810   80062 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-768633"
	I1017 18:57:12.209831   80062 addons.go:238] Setting addon registry=true in "addons-768633"
	I1017 18:57:12.209836   80062 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-768633"
	I1017 18:57:12.209846   80062 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-768633"
	I1017 18:57:12.209859   80062 addons.go:69] Setting storage-provisioner=true in profile "addons-768633"
	I1017 18:57:12.209869   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.209870   80062 addons.go:238] Setting addon storage-provisioner=true in "addons-768633"
	I1017 18:57:12.209878   80062 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-768633"
	I1017 18:57:12.209880   80062 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-768633"
	I1017 18:57:12.209885   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.209890   80062 addons.go:69] Setting volcano=true in profile "addons-768633"
	I1017 18:57:12.209891   80062 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-768633"
	I1017 18:57:12.209899   80062 addons.go:238] Setting addon volcano=true in "addons-768633"
	I1017 18:57:12.209902   80062 addons.go:69] Setting volumesnapshots=true in profile "addons-768633"
	I1017 18:57:12.209911   80062 addons.go:69] Setting ingress-dns=true in profile "addons-768633"
	I1017 18:57:12.209912   80062 addons.go:238] Setting addon volumesnapshots=true in "addons-768633"
	I1017 18:57:12.209921   80062 addons.go:238] Setting addon ingress-dns=true in "addons-768633"
	I1017 18:57:12.210063   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.210078   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.210111   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.210209   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.210308   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.210415   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.210441   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.210462   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.210513   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.210524   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.210591   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.210614   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.210680   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.210709   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.210997   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.211027   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.211904   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.212029   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.212266   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.212451   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.212587   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.212793   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.214740   80062 out.go:179] * Verifying Kubernetes components...
	I1017 18:57:12.216514   80062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 18:57:12.220174   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.220221   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.224047   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.224098   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.224248   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.224287   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.225071   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.225116   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.226164   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.226215   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.226920   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.226966   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.231181   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I1017 18:57:12.232068   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.232503   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.232581   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.232945   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.233533   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.233589   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.238940   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34985
	I1017 18:57:12.243331   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.243884   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.243909   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.244322   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.244531   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
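Each "Launching plugin server for driver kvm2" / "Plugin server listening at address 127.0.0.1:<port>" pair above is libmachine's plugin model: the driver runs as a separate binary serving an RPC API on a loopback port, and minikube dials it for the .GetVersion, .GetState, and similar calls interleaved through the rest of this log. A toy version of that pattern with net/rpc; the service name and method are invented for illustration and are not libmachine's wire protocol:

	// Toy plugin-server pattern: serve an RPC API on an ephemeral loopback
	// port, print the address, and let a client dial in. Names are invented.
	package main

	import (
		"fmt"
		"net"
		"net/rpc"
	)

	type Driver struct{}

	// GetVersion mimics the () Calling .GetVersion handshake in the log.
	func (d *Driver) GetVersion(_ string, reply *int) error {
		*reply = 1
		return nil
	}

	func main() {
		srv := rpc.NewServer()
		if err := srv.Register(&Driver{}); err != nil {
			panic(err)
		}
		ln, err := net.Listen("tcp", "127.0.0.1:0") // ephemeral port, as in the log
		if err != nil {
			panic(err)
		}
		fmt.Println("Plugin server listening at address", ln.Addr())
		go srv.Accept(ln)

		client, err := rpc.Dial("tcp", ln.Addr().String())
		if err != nil {
			panic(err)
		}
		var v int
		if err := client.Call("Driver.GetVersion", "", &v); err != nil {
			panic(err)
		}
		fmt.Println("Using API Version", v)
	}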
	I1017 18:57:12.250664   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I1017 18:57:12.250674   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35943
	I1017 18:57:12.250806   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I1017 18:57:12.251313   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.251510   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.252215   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.252238   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.252684   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.253180   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.253199   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.253331   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.253342   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.253693   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.253749   80062 addons.go:238] Setting addon default-storageclass=true in "addons-768633"
	I1017 18:57:12.253821   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.253928   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.254504   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.254543   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.254760   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.254798   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.255034   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.255167   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44357
	I1017 18:57:12.255780   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.256310   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.256352   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.256740   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43361
	I1017 18:57:12.257381   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.257873   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.257909   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.258155   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.258852   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.259036   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.259042   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.259075   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.259084   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.259189   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.259689   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.259870   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.259900   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.268399   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I1017 18:57:12.269319   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.269975   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.269998   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.270474   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.271002   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.271085   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38731
	I1017 18:57:12.271774   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42391
	I1017 18:57:12.275037   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.275467   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.275509   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.277688   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38329
	I1017 18:57:12.277705   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.278038   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I1017 18:57:12.278310   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.278324   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.278684   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.279155   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.279169   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.279847   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.280437   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.280474   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.283939   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.284210   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.284692   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.284734   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.285037   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.285055   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.285129   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.285436   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40427
	I1017 18:57:12.285814   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.285828   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.286285   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40681
	I1017 18:57:12.286394   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.287035   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.287058   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.287474   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
	I1017 18:57:12.287707   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.287731   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.287832   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I1017 18:57:12.287891   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.288363   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.288421   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.288437   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.288825   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.288991   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.289004   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.289382   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.289642   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.289725   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.290020   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.290495   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41531
	I1017 18:57:12.290601   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.291119   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.291823   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.291860   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.292195   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.294976   80062 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-768633"
	I1017 18:57:12.295126   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:12.296249   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.296282   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.297898   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.297983   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.298091   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.298514   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
	I1017 18:57:12.298857   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.299312   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.299732   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.300244   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.300264   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.300670   80062 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1017 18:57:12.300730   80062 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 18:57:12.300766   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.300796   80062 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1017 18:57:12.301041   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.302025   80062 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 18:57:12.302296   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 18:57:12.302320   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:12.302657   80062 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1017 18:57:12.302674   80062 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1017 18:57:12.302692   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:12.303289   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.303515   80062 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 18:57:12.303708   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.305650   80062 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 18:57:12.305822   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.306599   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.306864   80062 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 18:57:12.306880   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1017 18:57:12.306900   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:12.307271   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.307619   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45855
	I1017 18:57:12.308258   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.308371   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.308573   80062 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1017 18:57:12.308861   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.309635   80062 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1017 18:57:12.309814   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.309828   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.309702   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.311047   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.311283   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.311411   80062 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1017 18:57:12.311440   80062 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1017 18:57:12.311462   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:12.312093   80062 out.go:179]   - Using image docker.io/registry:3.0.0
	I1017 18:57:12.313460   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.313601   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.314001   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.314084   80062 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1017 18:57:12.314098   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1017 18:57:12.314200   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:12.315741   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:12.315776   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.316056   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:12.316259   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:12.316439   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.316508   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:12.316562   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I1017 18:57:12.316872   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:12.317122   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39111
	I1017 18:57:12.317329   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.317387   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.318331   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I1017 18:57:12.318809   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:12.318922   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.319309   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.319698   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:12.319779   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.320645   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.320674   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.320738   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:12.320884   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:12.320921   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.321824   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:12.322116   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.322132   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.322219   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.322299   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38333
	I1017 18:57:12.322954   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.323011   80062 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1017 18:57:12.323042   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41431
	I1017 18:57:12.323418   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:12.323735   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.323749   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.323773   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.324373   80062 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1017 18:57:12.324388   80062 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1017 18:57:12.324408   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:12.324489   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:12.324523   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.324541   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:12.324855   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:12.325303   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:12.325350   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34661
	I1017 18:57:12.325520   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:12.326259   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.326296   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.326642   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35267
	I1017 18:57:12.326699   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:12.326770   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.326770   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.326804   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.326977   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:12.327071   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:12.327299   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.327317   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.327443   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.327598   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:12.328376   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37959
	I1017 18:57:12.328390   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.328444   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.328471   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.328523   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.328377   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37609
	I1017 18:57:12.328671   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:12.328537   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.328925   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.329267   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:12.329297   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.329825   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.329858   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.329871   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.329901   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:12.329961   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.329977   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.330011   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.330159   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:12.330346   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.330367   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.330440   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.330596   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:12.331002   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.331051   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.331101   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.331265   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.331278   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.331710   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.331785   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.331743   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.332356   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.332424   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.332607   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.332993   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.333017   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.333208   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.333296   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.333773   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.333789   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I1017 18:57:12.333901   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.334611   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.334659   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:12.334749   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:12.338959   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.339117   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:12.339153   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.339211   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.339298   80062 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 18:57:12.339310   80062 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 18:57:12.339327   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:12.339384   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.339981   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:12.340094   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.340114   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.340404   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:12.340589   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.340779   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.342157   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:12.342464   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:12.343008   80062 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1017 18:57:12.344743   80062 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 18:57:12.344812   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1017 18:57:12.344836   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:12.349737   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40385
	I1017 18:57:12.349753   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.350713   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.351341   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.351408   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.351993   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.352116   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.352488   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.354310   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46203
	I1017 18:57:12.354593   80062 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1017 18:57:12.354947   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.355462   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33509
	I1017 18:57:12.355597   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.355615   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.356178   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I1017 18:57:12.356608   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.356664   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:12.356688   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.356856   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:12.356912   80062 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 18:57:12.356925   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1017 18:57:12.356954   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:12.357614   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:12.357622   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.357626   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.357682   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.357712   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.357847   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:12.357883   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:12.358103   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.358415   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:12.358514   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:12.358984   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.359025   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:12.359107   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.359124   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.359254   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.359271   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.359299   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:12.359493   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:12.359886   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.361962   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.360071   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.362649   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.362654   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44613
	I1017 18:57:12.362717   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.363037   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.363364   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.363680   80062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1017 18:57:12.363940   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:12.364011   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.364143   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.364167   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.364215   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:12.364265   80062 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1017 18:57:12.364404   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:12.364602   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.364916   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:12.364984   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.365259   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.365132   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:12.365808   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.366081   80062 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1017 18:57:12.366136   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1017 18:57:12.366169   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:12.367758   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.368105   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:12.368158   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:12.368545   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:12.368593   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:12.368605   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:12.368616   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:12.368940   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:12.369099   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:12.369232   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44307
	W1017 18:57:12.369270   80062 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1017 18:57:12.369864   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.370044   80062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1017 18:57:12.370368   80062 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1017 18:57:12.370424   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.370460   80062 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1017 18:57:12.370902   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.370635   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35445
	I1017 18:57:12.371265   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.371322   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.371506   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:12.371683   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.371795   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:12.371819   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.372007   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:12.372026   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:12.372217   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:12.372438   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:12.372494   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:12.372613   80062 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 18:57:12.372661   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1017 18:57:12.372678   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:12.372678   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:12.372726   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:12.372848   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:12.373897   80062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1017 18:57:12.373931   80062 out.go:179]   - Using image docker.io/busybox:stable
	I1017 18:57:12.375133   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.375382   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:12.375485   80062 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 18:57:12.375614   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1017 18:57:12.375636   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:12.376834   80062 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1017 18:57:12.376844   80062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1017 18:57:12.376862   80062 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1017 18:57:12.377405   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.378123   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:12.378152   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.378390   80062 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1017 18:57:12.378434   80062 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 18:57:12.378438   80062 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1017 18:57:12.378443   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1017 18:57:12.378457   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:12.378463   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:12.378495   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:12.378739   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:12.378928   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:12.379137   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:12.379711   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.380023   80062 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1017 18:57:12.380417   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:12.380447   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.380650   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:12.380836   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:12.381005   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:12.381181   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:12.382814   80062 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1017 18:57:12.383259   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.383744   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:12.383791   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.383809   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.383991   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:12.384243   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:12.384439   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:12.384603   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:12.384625   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:12.384636   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.384851   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:12.385036   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:12.385170   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:12.385319   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:12.385413   80062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1017 18:57:12.386819   80062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1017 18:57:12.388133   80062 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1017 18:57:12.388155   80062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1017 18:57:12.388180   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:12.392522   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.393081   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:12.393108   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:12.393347   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:12.393547   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:12.393752   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:12.393918   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:12.928252   80062 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1017 18:57:12.928283   80062 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1017 18:57:13.121640   80062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 18:57:13.121670   80062 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 18:57:13.302292   80062 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1017 18:57:13.302326   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1017 18:57:13.309943   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 18:57:13.330788   80062 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:13.330822   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1017 18:57:13.337583   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 18:57:13.394119   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 18:57:13.417020   80062 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1017 18:57:13.417048   80062 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1017 18:57:13.430230   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 18:57:13.446320   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 18:57:13.449907   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 18:57:13.455753   80062 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1017 18:57:13.455785   80062 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1017 18:57:13.477779   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1017 18:57:13.494997   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 18:57:13.616473   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 18:57:13.748383   80062 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1017 18:57:13.748418   80062 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1017 18:57:13.791333   80062 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1017 18:57:13.791368   80062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1017 18:57:13.917854   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:13.989268   80062 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1017 18:57:13.989307   80062 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1017 18:57:14.131476   80062 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1017 18:57:14.131529   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1017 18:57:14.144260   80062 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1017 18:57:14.144292   80062 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1017 18:57:14.460853   80062 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1017 18:57:14.460884   80062 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1017 18:57:14.597363   80062 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1017 18:57:14.597398   80062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1017 18:57:14.663080   80062 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 18:57:14.663111   80062 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1017 18:57:14.735132   80062 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1017 18:57:14.735171   80062 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1017 18:57:14.775854   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1017 18:57:14.929160   80062 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1017 18:57:14.929204   80062 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1017 18:57:15.007371   80062 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1017 18:57:15.007407   80062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1017 18:57:15.137489   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 18:57:15.161452   80062 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 18:57:15.161485   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1017 18:57:15.330645   80062 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1017 18:57:15.330676   80062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1017 18:57:15.399806   80062 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1017 18:57:15.399832   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1017 18:57:15.437369   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 18:57:15.699090   80062 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1017 18:57:15.699126   80062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1017 18:57:15.907018   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1017 18:57:16.323469   80062 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1017 18:57:16.323500   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1017 18:57:16.732528   80062 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.610819102s)
	I1017 18:57:16.732597   80062 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1017 18:57:16.732607   80062 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.610923081s)
	I1017 18:57:16.732651   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.422668714s)
	I1017 18:57:16.732697   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:16.732712   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:16.732718   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.395097923s)
	I1017 18:57:16.732751   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:16.732767   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:16.732826   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.338676523s)
	I1017 18:57:16.732847   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:16.732857   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:16.733200   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:16.733261   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:16.733270   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:16.733281   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:16.733291   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:16.733298   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:16.733310   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:16.733318   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:16.733325   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:16.733381   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:16.733529   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:16.733565   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:16.733578   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:16.733578   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:16.733588   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:16.733613   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:16.733621   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:16.733638   80062 node_ready.go:35] waiting up to 6m0s for node "addons-768633" to be "Ready" ...
	I1017 18:57:16.733823   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:16.733867   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:16.735503   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:16.735520   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:16.735662   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:16.735679   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:16.741107   80062 node_ready.go:49] node "addons-768633" is "Ready"
	I1017 18:57:16.741131   80062 node_ready.go:38] duration metric: took 7.471539ms for node "addons-768633" to be "Ready" ...
	I1017 18:57:16.741147   80062 api_server.go:52] waiting for apiserver process to appear ...
	I1017 18:57:16.741189   80062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 18:57:16.961194   80062 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1017 18:57:16.961227   80062 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1017 18:57:17.288395   80062 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-768633" context rescaled to 1 replicas
	I1017 18:57:17.783662   80062 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1017 18:57:17.783695   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1017 18:57:18.370082   80062 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1017 18:57:18.370184   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1017 18:57:18.761500   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.331231011s)
	I1017 18:57:18.761572   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:18.761585   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:18.761588   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.315226785s)
	I1017 18:57:18.761641   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:18.761655   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:18.761879   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:18.761987   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:18.762009   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:18.762012   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:18.762017   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:18.762024   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:18.762032   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:18.762034   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:18.762041   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:18.762049   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:18.762274   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:18.762290   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:18.762353   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:18.762396   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:18.762441   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:18.780229   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:18.780257   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:18.780577   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:18.780592   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:18.955392   80062 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1017 18:57:18.955419   80062 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1017 18:57:19.424514   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1017 18:57:19.756412   80062 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1017 18:57:19.756451   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:19.760368   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:19.760867   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:19.760903   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:19.761109   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:19.761389   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:19.761627   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:19.761764   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:20.331143   80062 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1017 18:57:20.583960   80062 addons.go:238] Setting addon gcp-auth=true in "addons-768633"
	I1017 18:57:20.584019   80062 host.go:66] Checking if "addons-768633" exists ...
	I1017 18:57:20.584365   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:20.584417   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:20.599204   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I1017 18:57:20.599771   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:20.600365   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:20.600389   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:20.600807   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:20.601400   80062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 18:57:20.601449   80062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 18:57:20.615235   80062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39819
	I1017 18:57:20.615713   80062 main.go:141] libmachine: () Calling .GetVersion
	I1017 18:57:20.616177   80062 main.go:141] libmachine: Using API Version  1
	I1017 18:57:20.616204   80062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 18:57:20.616585   80062 main.go:141] libmachine: () Calling .GetMachineName
	I1017 18:57:20.616791   80062 main.go:141] libmachine: (addons-768633) Calling .GetState
	I1017 18:57:20.618727   80062 main.go:141] libmachine: (addons-768633) Calling .DriverName
	I1017 18:57:20.618956   80062 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1017 18:57:20.618982   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHHostname
	I1017 18:57:20.622206   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:20.622667   80062 main.go:141] libmachine: (addons-768633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:10:49", ip: ""} in network mk-addons-768633: {Iface:virbr1 ExpiryTime:2025-10-17 19:56:44 +0000 UTC Type:0 Mac:52:54:00:2d:10:49 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-768633 Clientid:01:52:54:00:2d:10:49}
	I1017 18:57:20.622696   80062 main.go:141] libmachine: (addons-768633) DBG | domain addons-768633 has defined IP address 192.168.39.150 and MAC address 52:54:00:2d:10:49 in network mk-addons-768633
	I1017 18:57:20.622872   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHPort
	I1017 18:57:20.623082   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHKeyPath
	I1017 18:57:20.623290   80062 main.go:141] libmachine: (addons-768633) Calling .GetSSHUsername
	I1017 18:57:20.623432   80062 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/addons-768633/id_rsa Username:docker}
	I1017 18:57:23.192275   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.74231342s)
	I1017 18:57:23.192338   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:23.192353   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:23.192364   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.71454542s)
	I1017 18:57:23.192420   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:23.192437   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:23.192497   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.697467844s)
	I1017 18:57:23.192520   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:23.192530   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:23.192601   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.576086877s)
	I1017 18:57:23.192649   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:23.192664   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:23.192760   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.416874666s)
	I1017 18:57:23.192776   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:23.192786   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:23.192842   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:23.192843   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:23.192849   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:23.192852   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.055332446s)
	I1017 18:57:23.192879   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:23.192881   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:23.192879   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:23.192880   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:23.192887   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:23.192902   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:23.192906   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:23.192909   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:23.192908   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:23.192915   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:23.192919   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:23.192924   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:23.192929   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:23.192936   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:23.192894   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:23.192697   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (9.274792891s)
	I1017 18:57:23.192930   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:23.192987   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.755588531s)
	W1017 18:57:23.192987   80062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
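The validation error above means at least one YAML document in ig-crd.yaml is missing the top-level apiVersion and kind fields that every Kubernetes object requires, so kubectl rejects the apply. A small pre-flight check for that condition might look like this (gopkg.in/yaml.v3 assumed; the file path is illustrative):

// Scan a multi-document manifest and flag documents missing apiVersion/kind,
// the exact condition kubectl's validator reports above.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("ig-crd.yaml") // placeholder manifest path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			fmt.Printf("document %d: apiVersion or kind not set\n", i)
		}
	}
}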
	W1017 18:57:23.193011   80062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
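This second failure is an ordering race rather than a bad manifest: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, and the apiserver has not finished registering snapshot.storage.k8s.io/v1 yet. One defensive pattern, sketched here with client-go's discovery client (kubeconfig path and timeout are illustrative), is to poll for the group/version before applying dependent objects:

// Wait until the apiserver serves snapshot.storage.k8s.io/v1 and lists the
// VolumeSnapshotClass kind, i.e. the CRD is established.
package main

import (
	"fmt"
	"log"
	"time"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
		list, err := dc.ServerResourcesForGroupVersion("snapshot.storage.k8s.io/v1")
		if err != nil {
			continue // group/version not served yet; CRD still registering
		}
		for _, r := range list.APIResources {
			if r.Kind == "VolumeSnapshotClass" {
				fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
				return
			}
		}
	}
	log.Fatal("timed out waiting for snapshot.storage.k8s.io/v1")
}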
	I1017 18:57:23.193016   80062 retry.go:31] will retry after 308.164903ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:23.192860   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:23.193032   80062 retry.go:31] will retry after 160.106891ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
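The retry.go lines above ("will retry after 308.164903ms", "will retry after 160.106891ms") show minikube's answer to both failures: re-run the apply after a randomized delay. A generic retry-with-backoff helper in that spirit (the base interval, growth, and jitter here are illustrative, not minikube's actual policy):

// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
// sleeping a randomized, growing interval between failures.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Exponential growth plus jitter, mirroring the varying delays in the log.
		sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("apply failed (attempt %d)", calls)
		}
		return nil
	})
	fmt.Println("done:", err)
}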
	I1017 18:57:23.193042   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:23.193520   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.286455584s)
	I1017 18:57:23.193571   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:23.193584   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:23.193611   80062 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.452405138s)
	I1017 18:57:23.193634   80062 api_server.go:72] duration metric: took 10.984715616s to wait for apiserver process to appear ...
	I1017 18:57:23.193642   80062 api_server.go:88] waiting for apiserver healthz status ...
	I1017 18:57:23.193661   80062 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1017 18:57:23.194146   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:23.194177   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:23.194184   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:23.194362   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:23.194382   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:23.194387   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:23.195089   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:23.195236   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:23.195244   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:23.195590   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:23.195596   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:23.195609   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:23.195619   80062 addons.go:479] Verifying addon ingress=true in "addons-768633"
	I1017 18:57:23.195619   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:23.195601   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:23.195686   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:23.195693   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:23.195859   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:23.195890   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:23.195899   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:23.195907   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:23.195914   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:23.196918   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:23.196978   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:23.196998   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:23.197015   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:23.197030   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:23.197412   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:23.197432   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:23.197442   80062 addons.go:479] Verifying addon registry=true in "addons-768633"
	I1017 18:57:23.197809   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:23.197828   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:23.197766   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:23.197812   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:23.198056   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:23.198065   80062 addons.go:479] Verifying addon metrics-server=true in "addons-768633"
	I1017 18:57:23.197780   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:23.199044   80062 out.go:179] * Verifying ingress addon...
	I1017 18:57:23.199045   80062 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-768633 service yakd-dashboard -n yakd-dashboard
	
	I1017 18:57:23.199088   80062 out.go:179] * Verifying registry addon...
	I1017 18:57:23.201540   80062 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1017 18:57:23.201724   80062 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
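The kapi.go "Waiting for pod with label ... in ns ..." loops that dominate the rest of this log poll the apiserver for pods matching a selector until they report Running. A hedged client-go sketch of that loop (kubeconfig path, namespace, and selector mirror the log but are placeholders):

// Poll until every pod matching the label selector is Running.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=ingress-nginx"})
		if err != nil {
			log.Fatal(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		fmt.Printf("%d/%d pods running\n", running, len(pods.Items))
		if len(pods.Items) > 0 && running == len(pods.Items) {
			return
		}
		time.Sleep(500 * time.Millisecond) // "current state: Pending", try again
	}
}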
	I1017 18:57:23.223118   80062 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I1017 18:57:23.228369   80062 api_server.go:141] control plane version: v1.34.1
	I1017 18:57:23.228404   80062 api_server.go:131] duration metric: took 34.754919ms to wait for apiserver health ...
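The healthz probe above is just an HTTPS GET against the apiserver expecting status 200 with body "ok". A minimal sketch (skipping TLS verification as a shortcut for the example; the real client would trust the cluster CA instead):

// Probe the apiserver's /healthz endpoint and report the result.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.150:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	if resp.StatusCode == http.StatusOK && string(body) == "ok" {
		fmt.Println("apiserver healthy")
	}
}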
	I1017 18:57:23.228437   80062 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 18:57:23.246022   80062 system_pods.go:59] 17 kube-system pods found
	I1017 18:57:23.246068   80062 system_pods.go:61] "amd-gpu-device-plugin-tfnp7" [4ab280fd-be63-47d2-97e7-a8202afe7127] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 18:57:23.246080   80062 system_pods.go:61] "coredns-66bc5c9577-hcp9r" [ba9e4429-23a0-4dc3-9de8-f1dda1a00999] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:57:23.246090   80062 system_pods.go:61] "coredns-66bc5c9577-wd4gb" [3d966ac8-a12e-4c72-88f0-4a3184c4d013] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:57:23.246097   80062 system_pods.go:61] "etcd-addons-768633" [1dc0ee45-3cb0-4e8f-89d0-2c6cb6138da5] Running
	I1017 18:57:23.246108   80062 system_pods.go:61] "kube-apiserver-addons-768633" [ff68a08d-51ce-4a08-81f4-f27d9d94a1ad] Running
	I1017 18:57:23.246112   80062 system_pods.go:61] "kube-controller-manager-addons-768633" [a037f775-bac9-4492-a293-9dadddb66cd7] Running
	I1017 18:57:23.246120   80062 system_pods.go:61] "kube-ingress-dns-minikube" [4217219b-58ba-497d-a00e-99b6ad7cfc85] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 18:57:23.246128   80062 system_pods.go:61] "kube-proxy-dnjlc" [02af884b-a081-45d9-8441-d3dd959250c9] Running
	I1017 18:57:23.246134   80062 system_pods.go:61] "kube-scheduler-addons-768633" [30fd3d88-2dda-4ad9-adce-32ba70ef594b] Running
	I1017 18:57:23.246145   80062 system_pods.go:61] "metrics-server-85b7d694d7-5fqt5" [69ada210-4511-40aa-b098-0b90b6815015] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:57:23.246156   80062 system_pods.go:61] "nvidia-device-plugin-daemonset-rk98j" [ba1839a9-838a-471c-bad5-74ae4ea0fbab] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:57:23.246167   80062 system_pods.go:61] "registry-6b586f9694-hqf8t" [35c22bac-fb0b-47ed-a059-1d4ce279275b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:57:23.246180   80062 system_pods.go:61] "registry-creds-764b6fb674-r9bj7" [fa942e79-1265-4812-8e82-6d35fc0fc9ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:57:23.246191   80062 system_pods.go:61] "registry-proxy-v6ggf" [df1a7ef1-9163-4776-9a9c-ac545ca6ecc0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 18:57:23.246199   80062 system_pods.go:61] "snapshot-controller-7d9fbc56b8-cj6hw" [4781f6ba-eb3f-4619-a4f7-5fe42370d22b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:23.246210   80062 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ftxns" [766291a4-1362-4bb7-a3d6-dfcd38ce1299] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:23.246220   80062 system_pods.go:61] "storage-provisioner" [3a3fade3-84ac-4d8a-a78c-e90455760bfb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:57:23.246232   80062 system_pods.go:74] duration metric: took 17.787096ms to wait for pod list to return data ...
	I1017 18:57:23.246251   80062 default_sa.go:34] waiting for default service account to be created ...
	I1017 18:57:23.247893   80062 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1017 18:57:23.247918   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:23.248211   80062 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 18:57:23.248233   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:23.262936   80062 default_sa.go:45] found service account: "default"
	I1017 18:57:23.262973   80062 default_sa.go:55] duration metric: took 16.711014ms for default service account to be created ...
	I1017 18:57:23.262987   80062 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 18:57:23.284998   80062 system_pods.go:86] 17 kube-system pods found
	I1017 18:57:23.285050   80062 system_pods.go:89] "amd-gpu-device-plugin-tfnp7" [4ab280fd-be63-47d2-97e7-a8202afe7127] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 18:57:23.285062   80062 system_pods.go:89] "coredns-66bc5c9577-hcp9r" [ba9e4429-23a0-4dc3-9de8-f1dda1a00999] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:57:23.285078   80062 system_pods.go:89] "coredns-66bc5c9577-wd4gb" [3d966ac8-a12e-4c72-88f0-4a3184c4d013] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:57:23.285087   80062 system_pods.go:89] "etcd-addons-768633" [1dc0ee45-3cb0-4e8f-89d0-2c6cb6138da5] Running
	I1017 18:57:23.285099   80062 system_pods.go:89] "kube-apiserver-addons-768633" [ff68a08d-51ce-4a08-81f4-f27d9d94a1ad] Running
	I1017 18:57:23.285109   80062 system_pods.go:89] "kube-controller-manager-addons-768633" [a037f775-bac9-4492-a293-9dadddb66cd7] Running
	I1017 18:57:23.285119   80062 system_pods.go:89] "kube-ingress-dns-minikube" [4217219b-58ba-497d-a00e-99b6ad7cfc85] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 18:57:23.285129   80062 system_pods.go:89] "kube-proxy-dnjlc" [02af884b-a081-45d9-8441-d3dd959250c9] Running
	I1017 18:57:23.285139   80062 system_pods.go:89] "kube-scheduler-addons-768633" [30fd3d88-2dda-4ad9-adce-32ba70ef594b] Running
	I1017 18:57:23.285149   80062 system_pods.go:89] "metrics-server-85b7d694d7-5fqt5" [69ada210-4511-40aa-b098-0b90b6815015] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:57:23.285167   80062 system_pods.go:89] "nvidia-device-plugin-daemonset-rk98j" [ba1839a9-838a-471c-bad5-74ae4ea0fbab] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:57:23.285178   80062 system_pods.go:89] "registry-6b586f9694-hqf8t" [35c22bac-fb0b-47ed-a059-1d4ce279275b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:57:23.285187   80062 system_pods.go:89] "registry-creds-764b6fb674-r9bj7" [fa942e79-1265-4812-8e82-6d35fc0fc9ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:57:23.285197   80062 system_pods.go:89] "registry-proxy-v6ggf" [df1a7ef1-9163-4776-9a9c-ac545ca6ecc0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 18:57:23.285207   80062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cj6hw" [4781f6ba-eb3f-4619-a4f7-5fe42370d22b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:23.285222   80062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ftxns" [766291a4-1362-4bb7-a3d6-dfcd38ce1299] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:57:23.285234   80062 system_pods.go:89] "storage-provisioner" [3a3fade3-84ac-4d8a-a78c-e90455760bfb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:57:23.285251   80062 system_pods.go:126] duration metric: took 22.25423ms to wait for k8s-apps to be running ...
	I1017 18:57:23.285266   80062 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 18:57:23.285339   80062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
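The kubelet check relies on systemctl's exit status: "is-active --quiet" prints nothing and exits 0 only when the unit is active. In Go that maps directly onto os/exec (simplified here to the single kubelet unit):

// Check whether the kubelet service is active via systemctl's exit code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means active; any non-zero status surfaces as a non-nil error.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}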
	I1017 18:57:23.301382   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:23.301417   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:23.301786   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:23.301809   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:23.353348   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 18:57:23.501393   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:23.721863   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:23.721983   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:24.226250   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:24.324824   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:24.338852   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.914281932s)
	I1017 18:57:24.338897   80062 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.719916964s)
	I1017 18:57:24.338915   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:24.338932   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:24.338949   80062 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.053582148s)
	I1017 18:57:24.339000   80062 system_svc.go:56] duration metric: took 1.053729113s WaitForService to wait for kubelet
	I1017 18:57:24.339085   80062 kubeadm.go:586] duration metric: took 12.130161947s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 18:57:24.339110   80062 node_conditions.go:102] verifying NodePressure condition ...
	I1017 18:57:24.339289   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:24.339343   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:24.339361   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:24.339377   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:24.339387   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:24.339610   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:24.339623   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:24.339647   80062 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-768633"
	I1017 18:57:24.340762   80062 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 18:57:24.341740   80062 out.go:179] * Verifying csi-hostpath-driver addon...
	I1017 18:57:24.343338   80062 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1017 18:57:24.343990   80062 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1017 18:57:24.344503   80062 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1017 18:57:24.344527   80062 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1017 18:57:24.381794   80062 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 18:57:24.381823   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:24.413492   80062 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1017 18:57:24.413529   80062 node_conditions.go:123] node cpu capacity is 2
	I1017 18:57:24.413569   80062 node_conditions.go:105] duration metric: took 74.452165ms to run NodePressure ...
	I1017 18:57:24.413585   80062 start.go:241] waiting for startup goroutines ...
	I1017 18:57:24.590917   80062 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1017 18:57:24.590953   80062 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1017 18:57:24.711179   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:24.713154   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:24.767579   80062 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 18:57:24.767610   80062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1017 18:57:24.852824   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:24.922343   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 18:57:25.216127   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:25.216331   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:25.357359   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:25.708571   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:25.709453   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:25.851200   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:26.210408   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:26.212993   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:26.359497   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:26.634292   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.28088326s)
	I1017 18:57:26.634357   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:26.634374   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:26.634722   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:26.634743   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:26.634795   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:26.634816   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:26.634827   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:26.635102   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:26.635123   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:26.763312   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:26.836060   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:26.886249   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:27.109271   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.607830694s)
	W1017 18:57:27.109319   80062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:27.109375   80062 retry.go:31] will retry after 261.287418ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:27.109469   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.187074726s)
	I1017 18:57:27.109530   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:27.109566   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:27.109910   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:27.109966   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:27.109979   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:27.109992   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:57:27.110002   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:57:27.110280   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:57:27.110297   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:57:27.110304   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:57:27.111529   80062 addons.go:479] Verifying addon gcp-auth=true in "addons-768633"
	I1017 18:57:27.113395   80062 out.go:179] * Verifying gcp-auth addon...
	I1017 18:57:27.115501   80062 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1017 18:57:27.124727   80062 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1017 18:57:27.124767   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:27.207958   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:27.209439   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:27.354799   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:27.371728   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:27.622672   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:27.724033   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:27.725969   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:27.855687   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:28.121890   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:28.210387   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:28.211564   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:28.352708   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:28.623467   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:28.723289   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:28.723746   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:28.726379   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.354607681s)
	W1017 18:57:28.726412   80062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:28.726431   80062 retry.go:31] will retry after 304.348591ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:28.850752   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:29.031849   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:29.119544   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:29.209238   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:29.211296   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:29.350884   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:29.622398   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:29.712168   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:29.716761   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:29.857928   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:30.121114   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:30.206883   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:30.209378   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:30.350504   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:30.455962   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.424033431s)
	W1017 18:57:30.456030   80062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:30.456060   80062 retry.go:31] will retry after 448.952761ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:30.621528   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:30.711458   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:30.711663   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:30.848277   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:30.906034   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:31.121889   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:31.212224   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:31.212408   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:31.348855   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:31.619273   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:31.708466   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:31.710000   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:31.887582   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:32.055801   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.149721571s)
	W1017 18:57:32.055860   80062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:32.055885   80062 retry.go:31] will retry after 1.270697538s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:32.120144   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:32.210002   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:32.211825   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:32.352507   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:32.622280   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:32.710400   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:32.712782   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:32.852693   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:33.121756   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:33.212967   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:33.213243   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:33.327399   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:33.352585   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:33.619862   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:33.708711   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:33.712143   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:33.851786   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:34.118958   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:34.206030   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:34.206032   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:34.350467   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:34.437957   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.110492969s)
	W1017 18:57:34.438017   80062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:34.438051   80062 retry.go:31] will retry after 1.480034671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:34.623244   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:34.708843   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:34.711045   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:34.867875   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:35.120562   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:35.206473   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:35.207733   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:35.349512   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:35.618758   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:35.705436   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:35.705645   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:35.858765   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:35.918761   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:36.289097   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:36.290731   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:36.293469   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:36.350797   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:36.623084   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:36.707850   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:36.709316   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:36.855976   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:37.119922   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:37.136295   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.217484957s)
	W1017 18:57:37.136349   80062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:37.136376   80062 retry.go:31] will retry after 3.2908534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
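
The intervals in the retry lines (1.27s, 1.48s, 3.29s here, later 10.5s and 15.6s) grow with jitter rather than by a fixed factor. A self-contained sketch of that pattern, assuming plain os/exec and a hypothetical applyWithRetry helper; minikube's actual retry.go may differ in detail:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs kubectl apply with growing, jittered delays,
// which is the shape of the "will retry after ..." behavior above.
func applyWithRetry(files []string) error {
	args := []string{"apply", "--force"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	delay := time.Second
	var err error
	for attempt := 0; attempt < 10; attempt++ {
		if err = exec.Command("kubectl", args...).Run(); err == nil {
			return nil
		}
		// Jitter spreads retries out, giving irregular intervals like
		// the 1.27s, 1.48s, 3.29s ... sequence in this log.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
	return fmt.Errorf("apply failed after retries: %w", err)
}

func main() {
	err := applyWithRetry([]string{
		"/etc/kubernetes/addons/ig-crd.yaml",
		"/etc/kubernetes/addons/ig-deployment.yaml",
	})
	if err != nil {
		fmt.Println(err)
	}
}

Since the manifest itself is invalid, no amount of retrying can succeed here; the backoff only caps how much churn the failing apply generates.
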
	I1017 18:57:37.210384   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:37.211386   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:37.348410   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:37.890073   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:37.917776   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:37.918260   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:37.919185   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:38.120792   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:38.210513   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:38.210782   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:38.349233   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:38.695933   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:38.709976   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:38.712535   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:38.854568   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:39.121416   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:39.210173   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:39.211483   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:39.349960   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:39.620062   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:39.706354   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:39.707162   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:39.864155   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:40.119646   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:40.205346   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:40.206515   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:40.351230   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:40.428363   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:40.621938   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:40.705383   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:40.705448   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:41.049155   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:41.120337   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:41.220810   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:41.222606   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:41.365429   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:41.619487   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:41.708830   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:41.709487   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:41.753421   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.325012019s)
	W1017 18:57:41.753460   80062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:41.753485   80062 retry.go:31] will retry after 2.837757264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:41.857288   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:42.121111   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:42.207824   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:42.207913   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:42.348216   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:42.625821   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:42.705456   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:42.708063   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:43.015594   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:43.120787   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:43.206809   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:43.207024   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:43.350222   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:43.622787   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:43.722114   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:43.722521   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:43.849144   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:44.119223   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:44.206041   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:44.206116   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:44.347749   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:44.592033   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:44.620876   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:44.707079   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:44.709308   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:44.852573   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:45.122945   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:45.206624   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:45.209214   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:45.349934   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:45.620820   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:45.706325   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:45.710252   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:45.833758   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.241677345s)
	W1017 18:57:45.833810   80062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:45.833841   80062 retry.go:31] will retry after 3.518970599s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:45.854953   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:46.120742   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:46.206312   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:46.207402   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:46.349140   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:46.619688   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:46.711064   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:46.711644   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:46.856466   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:47.136773   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:47.208805   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:47.209326   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:47.348828   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:47.621752   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:47.706512   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:47.708847   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:47.849743   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:48.119147   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:48.206122   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:48.208802   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:48.349634   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:48.619526   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:48.706506   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:48.706821   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:48.850762   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:49.121215   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:49.208786   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:49.210137   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:49.347885   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:49.353781   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:49.622054   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:49.712349   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:49.712451   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:49.856457   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:50.121249   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:50.207093   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:50.212259   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:50.348353   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:50.409782   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.055944248s)
	W1017 18:57:50.409849   80062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:50.409879   80062 retry.go:31] will retry after 10.51858844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
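
Interleaved with the retries, the kapi.go:96 lines poll four label selectors roughly twice a second, staying Pending until each addon pod reaches Running. A client-go sketch of that wait loop, assuming the kubeconfig path from the log and a hypothetical waitForLabeledPod helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPod polls until a pod matching the selector is Running,
// roughly what the kapi.go:96 lines above are doing every ~500ms.
func waitForLabeledPod(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %q still pending: %w", selector, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabeledPod(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
}

The "Pending: [<nil>]" in the log means the selector matched no pod in a ready state yet, not that an error occurred.
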
	I1017 18:57:50.625366   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:50.727930   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:50.728234   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:50.848612   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:51.121037   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:51.209357   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:51.209942   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:51.352966   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:51.889593   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:51.890052   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:51.892690   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:51.892737   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:52.120908   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:52.205917   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:52.206090   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:52.348856   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:52.621337   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:52.706939   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:52.710848   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:52.861045   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:53.121265   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:53.382921   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:53.385861   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:53.386524   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:53.619166   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:53.718321   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:53.719216   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:53.849093   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:54.119455   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:54.206415   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:54.206651   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:54.349272   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:54.619579   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:54.707127   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:54.707267   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:55.034068   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:55.120618   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:55.209603   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:55.210216   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:55.347994   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:55.624277   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:55.709277   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:55.709481   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:55.852829   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:56.122075   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:56.207660   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:56.207995   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:56.348473   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:56.622469   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:56.707245   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:56.709724   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:56.852044   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:57.121259   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:57.206498   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:57.208890   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:57.347492   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:57.627736   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:57.706967   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:57.709307   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:57.849146   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:58.121880   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:58.205837   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:58.207426   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:58.471301   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:58.620834   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:58.708037   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:58.710893   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:58.851535   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:59.122778   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:59.205210   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:59.208755   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:59.349680   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:59.626270   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:59.711865   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:59.712560   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:59.857265   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:00.120069   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:00.206311   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:00.207227   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:00.350879   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:00.621937   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:00.709540   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:00.710198   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:00.855592   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:00.929700   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:01.120823   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:01.206817   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:01.209447   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:01.350691   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:01.624139   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:01.709461   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:01.711578   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:01.850330   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:02.124126   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:02.208046   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:02.209253   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:02.244025   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.314270116s)
	W1017 18:58:02.244107   80062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:02.244136   80062 retry.go:31] will retry after 15.570204797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
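
The stderr itself names an escape hatch, --validate=false, which would let the apply through without the missing fields. A short sketch of that invocation, illustrative only, since it suppresses the check rather than fixing the manifest:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Skips client-side validation, as the stderr above suggests. The
	// durable fix is to give every document in ig-crd.yaml a top-level
	// apiVersion and kind rather than to disable the check.
	cmd := exec.Command("kubectl", "apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/ig-crd.yaml",
		"-f", "/etc/kubernetes/addons/ig-deployment.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
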
	I1017 18:58:02.352299   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:02.619763   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:02.708697   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:02.711595   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:02.857367   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:03.120045   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:03.206177   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:03.210605   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:03.349098   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:03.620971   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:03.722042   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:03.722511   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:03.857856   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:04.119026   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:04.206911   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:04.210694   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:04.385472   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:04.620320   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:04.707570   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:04.708167   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:04.859520   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:05.121922   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:05.217157   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:05.217542   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:05.348503   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:05.620200   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:05.724214   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:05.724697   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:05.854630   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:06.120165   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:06.207203   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:06.207395   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:06.348853   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:06.620179   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:06.706203   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:06.708329   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:06.855595   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:07.119924   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:07.205618   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:07.206324   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:07.353445   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:07.622998   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:07.711315   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:07.712010   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:07.857489   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:08.126111   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:08.209505   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:08.211531   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:08.349111   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:08.622956   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:08.724009   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:08.724432   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:08.850175   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:09.120535   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:09.206575   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:09.206667   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:09.348127   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:09.622874   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:09.725248   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:09.725436   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:09.851996   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:10.121241   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:10.209937   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:10.210942   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:10.352708   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:10.623917   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:10.708107   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:10.711639   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:10.857414   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:11.124811   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:11.208764   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:11.211539   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:11.350886   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:11.620352   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:11.705345   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:11.707797   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:11.851784   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:12.120037   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:12.207051   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:12.209170   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:12.349645   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:12.620486   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:12.706462   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:12.706541   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:12.849501   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:13.119565   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:13.206323   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:13.208339   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:13.348365   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:13.619632   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:13.706792   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:13.708115   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:13.851196   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:14.120993   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:14.205862   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:14.206566   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:14.350863   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:14.623416   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:14.710099   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:14.712899   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:14.855028   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:15.219524   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:15.219759   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:15.219951   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:15.348624   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:15.622304   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:15.706844   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:15.709430   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:15.847881   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:16.120503   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:16.206008   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:16.207724   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:16.351791   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:16.619956   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:16.708117   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:16.708418   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:16.857734   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:17.119019   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:17.206217   80062 kapi.go:107] duration metric: took 54.00448329s to wait for kubernetes.io/minikube-addons=registry ...
	I1017 18:58:17.206499   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:17.351469   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:17.620231   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:17.709407   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:17.814534   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:17.853235   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:18.228711   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:18.241563   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:18.350285   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:18.619618   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:18.706323   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:18.858089   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:18.986474   80062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.171878247s)
	W1017 18:58:18.986532   80062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:18.986574   80062 retry.go:31] will retry after 23.893577813s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
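Both apply attempts above fail for the same reason: kubectl rejects any manifest document whose top level omits the required apiVersion and kind fields, which points at a malformed or stray document inside ig-crd.yaml rather than a cluster problem (the suggested --validate=false would only suppress the check, not repair the file). A minimal sketch of a well-formed header, with placeholder names standing in for the real CRD contents:

	# Hypothetical sketch, placeholder names only, not the actual ig-crd.yaml.
	# kubectl requires both top-level fields on every document it applies.
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: traces.gadget.example.io   # placeholder CRD name
	spec:
	  group: gadget.example.io         # the real group/names/versions schema
	                                   # would follow here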
	I1017 18:58:19.125045   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:19.227747   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:19.349438   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:19.618331   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:19.708265   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:19.850114   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:20.124396   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:20.228297   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:20.347662   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:20.619211   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:20.705883   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:20.850882   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:21.121790   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:21.206500   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:21.348434   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:21.620189   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:21.707081   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:21.849441   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:22.127782   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:22.210446   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:22.348614   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:22.620824   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:22.705613   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:22.848934   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:23.120798   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:23.207096   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:23.350720   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:23.618684   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:23.707776   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:23.854377   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:24.289327   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:24.289448   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:24.364574   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:24.619459   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:24.706113   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:24.850587   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:25.122030   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:25.212056   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:25.348516   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:25.621346   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:25.711265   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:25.847829   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:26.124808   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:26.221993   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:26.351422   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:26.620776   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:26.706461   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:26.850541   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:27.124591   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:27.215733   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:27.349571   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:27.619885   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:27.705717   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:27.848907   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:28.119146   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:28.207923   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:28.351212   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:28.623095   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:28.708826   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:28.850714   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:29.127741   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:29.229647   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:29.353471   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:29.622569   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:29.708161   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:29.863873   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:30.127413   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:30.208787   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:30.350146   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:30.813734   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:30.813915   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:30.913185   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:31.119290   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:31.208114   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:31.349370   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:31.621072   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:31.706712   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:31.851053   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:32.121202   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:32.206842   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:32.354248   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:32.621462   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:32.707659   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:32.850850   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:33.119674   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:33.206693   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:33.348766   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:33.620715   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:33.704691   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:33.847819   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:34.118671   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:34.204904   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:34.356428   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:34.621694   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:34.706824   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:34.851381   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:35.121497   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:35.209230   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:35.349353   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:35.622534   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:35.706857   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:35.851785   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:36.120970   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:36.206363   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:36.350578   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:36.744074   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:36.751021   80062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:36.848346   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:37.121015   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:37.205878   80062 kapi.go:107] duration metric: took 1m14.004335866s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1017 18:58:37.350389   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:37.622972   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:37.848062   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:38.119501   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:38.349008   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:38.716015   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:38.847966   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:39.121431   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:39.353584   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:39.623720   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:39.853049   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:40.119209   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:40.348891   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:40.619259   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:40.848687   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:41.120387   80062 kapi.go:107] duration metric: took 1m14.00488496s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1017 18:58:41.122017   80062 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-768633 cluster.
	I1017 18:58:41.123179   80062 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1017 18:58:41.124256   80062 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
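As the three messages above describe, the gcp-auth admission webhook mounts credentials into every new pod unless the pod carries the gcp-auth-skip-secret label. A minimal sketch of an opted-out pod follows; the pod name, image, and label value are illustrative, since the output above only specifies the label key:

	# Hypothetical example pod that opts out of credential mounting.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds               # placeholder name
	  labels:
	    gcp-auth-skip-secret: "true"   # key named in the minikube output above
	spec:
	  containers:
	  - name: app
	    image: busybox                 # placeholder image
	    command: ["sleep", "3600"]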
	I1017 18:58:41.347832   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:41.851472   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:42.358064   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:42.849655   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:42.880725   80062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:43.346960   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:43.833387   80062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:43.833469   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:58:43.833484   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:58:43.833781   80062 main.go:141] libmachine: (addons-768633) DBG | Closing plugin on server side
	I1017 18:58:43.833826   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:58:43.833835   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 18:58:43.833844   80062 main.go:141] libmachine: Making call to close driver server
	I1017 18:58:43.833851   80062 main.go:141] libmachine: (addons-768633) Calling .Close
	I1017 18:58:43.834094   80062 main.go:141] libmachine: Successfully made call to close driver server
	I1017 18:58:43.834118   80062 main.go:141] libmachine: Making call to close connection to plugin binary
	W1017 18:58:43.834246   80062 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1017 18:58:43.848096   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:44.348484   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:44.848603   80062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:45.348819   80062 kapi.go:107] duration metric: took 1m21.004824714s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1017 18:58:45.350776   80062 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, default-storageclass, cloud-spanner, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1017 18:58:45.352258   80062 addons.go:514] duration metric: took 1m33.143307632s for enable addons: enabled=[registry-creds nvidia-device-plugin amd-gpu-device-plugin ingress-dns default-storageclass cloud-spanner storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1017 18:58:45.352307   80062 start.go:246] waiting for cluster config update ...
	I1017 18:58:45.352326   80062 start.go:255] writing updated cluster config ...
	I1017 18:58:45.352642   80062 ssh_runner.go:195] Run: rm -f paused
	I1017 18:58:45.360627   80062 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 18:58:45.450215   80062 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hcp9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:58:45.455693   80062 pod_ready.go:94] pod "coredns-66bc5c9577-hcp9r" is "Ready"
	I1017 18:58:45.455721   80062 pod_ready.go:86] duration metric: took 5.4722ms for pod "coredns-66bc5c9577-hcp9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:58:45.457948   80062 pod_ready.go:83] waiting for pod "etcd-addons-768633" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:58:45.465681   80062 pod_ready.go:94] pod "etcd-addons-768633" is "Ready"
	I1017 18:58:45.465706   80062 pod_ready.go:86] duration metric: took 7.738021ms for pod "etcd-addons-768633" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:58:45.468538   80062 pod_ready.go:83] waiting for pod "kube-apiserver-addons-768633" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:58:45.473661   80062 pod_ready.go:94] pod "kube-apiserver-addons-768633" is "Ready"
	I1017 18:58:45.473689   80062 pod_ready.go:86] duration metric: took 5.115059ms for pod "kube-apiserver-addons-768633" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:58:45.475682   80062 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-768633" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:58:45.766177   80062 pod_ready.go:94] pod "kube-controller-manager-addons-768633" is "Ready"
	I1017 18:58:45.766203   80062 pod_ready.go:86] duration metric: took 290.500847ms for pod "kube-controller-manager-addons-768633" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:58:45.964671   80062 pod_ready.go:83] waiting for pod "kube-proxy-dnjlc" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:58:46.365003   80062 pod_ready.go:94] pod "kube-proxy-dnjlc" is "Ready"
	I1017 18:58:46.365031   80062 pod_ready.go:86] duration metric: took 400.324391ms for pod "kube-proxy-dnjlc" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:58:46.566511   80062 pod_ready.go:83] waiting for pod "kube-scheduler-addons-768633" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:58:46.965784   80062 pod_ready.go:94] pod "kube-scheduler-addons-768633" is "Ready"
	I1017 18:58:46.965812   80062 pod_ready.go:86] duration metric: took 399.273738ms for pod "kube-scheduler-addons-768633" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:58:46.965823   80062 pod_ready.go:40] duration metric: took 1.605163322s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 18:58:47.010210   80062 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 18:58:47.011988   80062 out.go:179] * Done! kubectl is now configured to use "addons-768633" cluster and "default" namespace by default
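The final "Done!" line means minikube wrote a context entry into the user's kubeconfig and made it current. Roughly what that entry looks like, as a sketch; only the cluster/context name and the default namespace come from the log line, the remaining fields are assumptions based on minikube's usual kubeconfig layout:

	# Hypothetical kubeconfig fragment for the addons-768633 profile.
	contexts:
	- context:
	    cluster: addons-768633
	    namespace: default
	    user: addons-768633            # assumed user entry name
	  name: addons-768633
	current-context: addons-768633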
	
	
	==> CRI-O <==
	Oct 17 19:01:42 addons-768633 crio[823]: time="2025-10-17 19:01:42.033386507Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760727702033355335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606631,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7100e247-1c0d-453b-b174-b9f43c6bda2c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 19:01:42 addons-768633 crio[823]: time="2025-10-17 19:01:42.034559843Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7a5ac52-7614-428d-9fae-e1799a5008c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:01:42 addons-768633 crio[823]: time="2025-10-17 19:01:42.034766655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7a5ac52-7614-428d-9fae-e1799a5008c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:01:42 addons-768633 crio[823]: time="2025-10-17 19:01:42.035314096Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be0730fa76904265c9a7b21afb22de2c3c93e72ae03261a213860188f5d5a184,PodSandboxId:0276be094ce5d4f0b9ee9381d96087ce47bb61e4264b7e47416fe257fdd974f8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1760727701886177415,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-2p5zd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 239806dd-ed71-40e6-ad0b-c47680e70554,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a5c35431cb3aa9c012ac40b4303d6e06b83691261839b8fbb8b2482a8b5afd7,PodSandboxId:2ffeb38765fbfe6971a195f5cdc0badb42c7c9412619d1b44c325ba990e58b9a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760727558250219485,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb0421a5-e7d4-4c0e-905f-a1e7cda960ac,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f1f71ac95e0f4dde98d22dfe3650b227d4f591dc58bc22c1e1cbd068706b7f,PodSandboxId:e9386740b714f8d273eff64ea0854ec6b467fc4becddebe3182aec3db87a8cf3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760727530968375311,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a170d263-cd6b-4c5a-af
f4-09a23f0f9b95,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d463186ed5ae13c7c91a2458461ffc2bee5d6cd00c961ed1ca39d0dffce3e15,PodSandboxId:9c83cb9acfc25f5e15774718253909e64ddad95a1ed03be625f8ab5fcf57f8c4,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760727516901653565,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-w4dgz,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: a2a5385f-e566-4891-9b27-c28588c44300,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:bbfacea85569a29aafc1ddd63163174113010c5205d0bf3f2b3e57beff6bfc69,PodSandboxId:a233b207f6342ba9fdd67546b33d6b97b56e07b1b705d5aa9268d45c07d70a0a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc367
3693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760727500926967186,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-27ffl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d38f0d16-0260-4e54-bc7f-912e60472a5f,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cdc876eecb5fd0dbc62d4d250a56eba4e0707632cd04aacae565b2bafbcfb7a,PodSandboxId:79f06a0f20b3d4cb62a6fdf0c571cacda84d17a369847757bfd276d51030e2fd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd
94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760727500792894166,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rqvzx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1da0bd8c-7a8d-4d34-a729-fe786643d7eb,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321695e589a9e8cd72e7032f77ea9200d2c5adc3c9484f4549f29789fff87a59,PodSandboxId:aa1da063711746983dbb5d161b0546e7e27bc66bf441be243cb397a00a74f483,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor
-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760727493870324453,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-xnjmh,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: ab42516c-6fe5-434c-8040-ef81e89c7bc6,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e620c1650412cdfedcca71fa158f3926f3a95433e0d7fb94206b2b2d6dfddfe,PodSandboxId:240baee7059afd86b9f767a0b1ffbf69ba14ba10412ce3690d16704ffd71ef06,Metadata:&Con
tainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760727483322913482,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4217219b-58ba-497d-a00e-99b6ad7cfc85,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f01c81dd4bcf10f0ce4ee
1c236b69510dd362a85e7ecb92743ed61545fd63db0,PodSandboxId:c1fb35aba95a0dd5755bb91c1c191a025e2db747c38e370904f3a94b70fdde11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760727445520746797,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a3fade3-84ac-4d8a-a78c-e90455760bfb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b781d9d138eae1827ea5662cd2e2255
722afb006859d6f506c72a5c95edd83,PodSandboxId:fe45ec4ab4bc979fdbc4755f88f21eca312cd34b3995efc4784faaa60ef1185f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760727441045210613,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-tfnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab280fd-be63-47d2-97e7-a8202afe7127,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:291e1ca6e8bd174cbb9010cf4e6251a9f4ac39f8b3f7a2b85a40fa6f7b57018f,PodSandboxId:ca32336983160cbc2d45292edb97d95f93a4ff8888812e2d5812bc956c59af2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760727434108294742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-hcp9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba9e4429-23a0-4dc3-9de8-f1dda1a00999,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\
",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:734d8668ed47e51ac15a16bd24aefbbfc0305ad4f5b0aafca5b9786a3566cc6c,PodSandboxId:63c26ce8d628411f2182dac2ce39ba9953d322bae458ca790a3f3aeb96c74420,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760727433313335762,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnjlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02af884b-a081-45d9
-8441-d3dd959250c9,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f428ab4d86406223481dee5a7d0181511d01f82be789ee5af1d1f70a7ba07e4e,PodSandboxId:54cd9c5416dfdcacfee0a22319d60f275549df82361be16207804ba2e84ec363,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760727421079683992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-768633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bf8fe57a4f9194ad680b48c2ea630
5,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d402a702463f4ac7cac87444d0a29c6a2cd4877549704277257f0db0d25aa292,PodSandboxId:ac28f2b52ccae2d15aba60b8088454ebf57c24d4674fa5df03ab5aefc224bb22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760727421096760503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-add
ons-768633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4332981c9701bc1e3976cb5a0b5a939c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c204072ef3685d5ea3f804b111bc2137fc8537426751dcfe7bcfed3ce93d97f0,PodSandboxId:988e36ba54a90331fe8ebddec48e4248720322b341310d73ac0e881a866291d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760727421027564325
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-768633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04349ca6a27cf95a96b4f4762a9355be,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be38e0026d7efcbd60974c798c272870d09eb6d8465fad637d3fa0592eb8d72f,PodSandboxId:14a4d146f0fb8dafa99225756e1340ea2907269f7d8908d5c9b99defa64c9233,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f
5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760727421030324946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-768633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 923979fcf8065f3918003e2e54125f1b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7a5ac52-7614-428d-9fae-e1799a5008c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:01:42 addons-768633 crio[823]: time="2025-10-17 19:01:42.079895222Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=384aafbf-5fb6-41d7-aef7-337ad6dca7f0 name=/runtime.v1.RuntimeService/Version
	Oct 17 19:01:42 addons-768633 crio[823]: time="2025-10-17 19:01:42.079974435Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=384aafbf-5fb6-41d7-aef7-337ad6dca7f0 name=/runtime.v1.RuntimeService/Version
	Oct 17 19:01:42 addons-768633 crio[823]: time="2025-10-17 19:01:42.081370354Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3576269b-ab26-49b6-871b-81f5466e53e2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 19:01:42 addons-768633 crio[823]: time="2025-10-17 19:01:42.082775765Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760727702082720586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606631,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3576269b-ab26-49b6-871b-81f5466e53e2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 19:01:42 addons-768633 crio[823]: time="2025-10-17 19:01:42.083551949Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d422970-e6d9-4c4b-9375-5b9ef6fcdff1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:01:42 addons-768633 crio[823]: time="2025-10-17 19:01:42.083616559Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d422970-e6d9-4c4b-9375-5b9ef6fcdff1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:01:42 addons-768633 crio[823]: time="2025-10-17 19:01:42.084951645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be0730fa76904265c9a7b21afb22de2c3c93e72ae03261a213860188f5d5a184,PodSandboxId:0276be094ce5d4f0b9ee9381d96087ce47bb61e4264b7e47416fe257fdd974f8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1760727701886177415,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-2p5zd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 239806dd-ed71-40e6-ad0b-c47680e70554,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a5c35431cb3aa9c012ac40b4303d6e06b83691261839b8fbb8b2482a8b5afd7,PodSandboxId:2ffeb38765fbfe6971a195f5cdc0badb42c7c9412619d1b44c325ba990e58b9a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760727558250219485,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb0421a5-e7d4-4c0e-905f-a1e7cda960ac,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f1f71ac95e0f4dde98d22dfe3650b227d4f591dc58bc22c1e1cbd068706b7f,PodSandboxId:e9386740b714f8d273eff64ea0854ec6b467fc4becddebe3182aec3db87a8cf3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760727530968375311,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a170d263-cd6b-4c5a-af
f4-09a23f0f9b95,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d463186ed5ae13c7c91a2458461ffc2bee5d6cd00c961ed1ca39d0dffce3e15,PodSandboxId:9c83cb9acfc25f5e15774718253909e64ddad95a1ed03be625f8ab5fcf57f8c4,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760727516901653565,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-w4dgz,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: a2a5385f-e566-4891-9b27-c28588c44300,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:bbfacea85569a29aafc1ddd63163174113010c5205d0bf3f2b3e57beff6bfc69,PodSandboxId:a233b207f6342ba9fdd67546b33d6b97b56e07b1b705d5aa9268d45c07d70a0a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc367
3693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760727500926967186,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-27ffl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d38f0d16-0260-4e54-bc7f-912e60472a5f,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cdc876eecb5fd0dbc62d4d250a56eba4e0707632cd04aacae565b2bafbcfb7a,PodSandboxId:79f06a0f20b3d4cb62a6fdf0c571cacda84d17a369847757bfd276d51030e2fd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd
94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760727500792894166,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rqvzx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1da0bd8c-7a8d-4d34-a729-fe786643d7eb,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321695e589a9e8cd72e7032f77ea9200d2c5adc3c9484f4549f29789fff87a59,PodSandboxId:aa1da063711746983dbb5d161b0546e7e27bc66bf441be243cb397a00a74f483,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor
-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760727493870324453,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-xnjmh,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: ab42516c-6fe5-434c-8040-ef81e89c7bc6,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e620c1650412cdfedcca71fa158f3926f3a95433e0d7fb94206b2b2d6dfddfe,PodSandboxId:240baee7059afd86b9f767a0b1ffbf69ba14ba10412ce3690d16704ffd71ef06,Metadata:&Con
tainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760727483322913482,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4217219b-58ba-497d-a00e-99b6ad7cfc85,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f01c81dd4bcf10f0ce4ee
1c236b69510dd362a85e7ecb92743ed61545fd63db0,PodSandboxId:c1fb35aba95a0dd5755bb91c1c191a025e2db747c38e370904f3a94b70fdde11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760727445520746797,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a3fade3-84ac-4d8a-a78c-e90455760bfb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b781d9d138eae1827ea5662cd2e2255
722afb006859d6f506c72a5c95edd83,PodSandboxId:fe45ec4ab4bc979fdbc4755f88f21eca312cd34b3995efc4784faaa60ef1185f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760727441045210613,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-tfnp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab280fd-be63-47d2-97e7-a8202afe7127,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:291e1ca6e8bd174cbb9010cf4e6251a9f4ac39f8b3f7a2b85a40fa6f7b57018f,PodSandboxId:ca32336983160cbc2d45292edb97d95f93a4ff8888812e2d5812bc956c59af2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760727434108294742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-hcp9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba9e4429-23a0-4dc3-9de8-f1dda1a00999,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\
",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:734d8668ed47e51ac15a16bd24aefbbfc0305ad4f5b0aafca5b9786a3566cc6c,PodSandboxId:63c26ce8d628411f2182dac2ce39ba9953d322bae458ca790a3f3aeb96c74420,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760727433313335762,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnjlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02af884b-a081-45d9
-8441-d3dd959250c9,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f428ab4d86406223481dee5a7d0181511d01f82be789ee5af1d1f70a7ba07e4e,PodSandboxId:54cd9c5416dfdcacfee0a22319d60f275549df82361be16207804ba2e84ec363,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760727421079683992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-768633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1bf8fe57a4f9194ad680b48c2ea630
5,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d402a702463f4ac7cac87444d0a29c6a2cd4877549704277257f0db0d25aa292,PodSandboxId:ac28f2b52ccae2d15aba60b8088454ebf57c24d4674fa5df03ab5aefc224bb22,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760727421096760503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-add
ons-768633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4332981c9701bc1e3976cb5a0b5a939c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c204072ef3685d5ea3f804b111bc2137fc8537426751dcfe7bcfed3ce93d97f0,PodSandboxId:988e36ba54a90331fe8ebddec48e4248720322b341310d73ac0e881a866291d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760727421027564325
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-768633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04349ca6a27cf95a96b4f4762a9355be,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be38e0026d7efcbd60974c798c272870d09eb6d8465fad637d3fa0592eb8d72f,PodSandboxId:14a4d146f0fb8dafa99225756e1340ea2907269f7d8908d5c9b99defa64c9233,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f
5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760727421030324946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-768633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 923979fcf8065f3918003e2e54125f1b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d422970-e6d9-4c4b-9375-5b9ef6fcdff1 name=/runtime.v1.RuntimeService/ListContainers
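	
	The Version / ImageFsInfo / ListContainers sequence above is routine CRI polling traffic on the CRI-O socket; in this capture it fired three times within roughly 100 ms (interceptor ids 384aafbf, 5bdeb60e, cb007a14) with byte-for-byte identical container lists, most likely the kubelet's periodic relisting plus the log collection that produced this report. For reference, the Go sketch below issues the same three RPCs as a standalone client. It is a minimal sketch, not part of the test suite: the CRI-O socket path and the use of the k8s.io/cri-api v1 bindings are assumptions about the node environment, not something taken from this report. Run it on the node itself (for example via "minikube ssh") with permission to read the socket, or use "crictl ps", which wraps the same ListContainers call.
	
	// poll_cri.go: a minimal sketch (assumed environment: CRI-O listening on
	// the standard /var/run/crio/crio.sock and k8s.io/cri-api v1 bindings).
	// It issues the same Version, ImageFsInfo, and ListContainers calls seen
	// in the debug log above.
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// CRI-O serves gRPC on a unix socket; grpc-go understands unix:// targets.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)
	
		// /runtime.v1.RuntimeService/Version
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)
	
		// /runtime.v1.ImageService/ImageFsInfo: image store usage per mountpoint.
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, f := range fs.ImageFilesystems {
			fmt.Printf("%s: %d bytes, %d inodes\n",
				f.FsId.Mountpoint, f.UsedBytes.Value, f.InodesUsed.Value)
		}
	
		// /runtime.v1.RuntimeService/ListContainers with an empty filter.
		cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range cs.Containers {
			fmt.Printf("%-25s %-20s %s\n",
				c.Metadata.Name, c.State, c.Labels["io.kubernetes.pod.name"])
		}
	}
	
	Sending an empty ContainerFilter, as this sketch does, is what makes CRI-O emit the "No filters were applied, returning full container list" debug line seen above; the full list it returns is what the container status table below summarizes.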
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	be0730fa76904       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   0276be094ce5d       hello-world-app-5d498dc89-2p5zd
	9a5c35431cb3a       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago            Running             nginx                     0                   2ffeb38765fbf       nginx
	22f1f71ac95e0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago            Running             busybox                   0                   e9386740b714f       busybox
	3d463186ed5ae       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago            Running             controller                0                   9c83cb9acfc25       ingress-nginx-controller-675c5ddd98-w4dgz
	bbfacea85569a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   3 minutes ago            Exited              patch                     0                   a233b207f6342       ingress-nginx-admission-patch-27ffl
	6cdc876eecb5f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   3 minutes ago            Exited              create                    0                   79f06a0f20b3d       ingress-nginx-admission-create-rqvzx
	321695e589a9e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            3 minutes ago            Running             gadget                    0                   aa1da06371174       gadget-xnjmh
	7e620c1650412       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago            Running             minikube-ingress-dns      0                   240baee7059af       kube-ingress-dns-minikube
	f01c81dd4bcf1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   c1fb35aba95a0       storage-provisioner
	37b781d9d138e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago            Running             amd-gpu-device-plugin     0                   fe45ec4ab4bc9       amd-gpu-device-plugin-tfnp7
	291e1ca6e8bd1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago            Running             coredns                   0                   ca32336983160       coredns-66bc5c9577-hcp9r
	734d8668ed47e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             4 minutes ago            Running             kube-proxy                0                   63c26ce8d6284       kube-proxy-dnjlc
	d402a702463f4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             4 minutes ago            Running             kube-apiserver            0                   ac28f2b52ccae       kube-apiserver-addons-768633
	f428ab4d86406       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             4 minutes ago            Running             kube-scheduler            0                   54cd9c5416dfd       kube-scheduler-addons-768633
	be38e0026d7ef       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago            Running             etcd                      0                   14a4d146f0fb8       etcd-addons-768633
	c204072ef3685       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             4 minutes ago            Running             kube-controller-manager   0                   988e36ba54a90       kube-controller-manager-addons-768633
	
	
	==> coredns [291e1ca6e8bd174cbb9010cf4e6251a9f4ac39f8b3f7a2b85a40fa6f7b57018f] <==
	[INFO] 10.244.0.8:47709 - 51875 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000154092s
	[INFO] 10.244.0.8:47709 - 55245 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000488162s
	[INFO] 10.244.0.8:47709 - 30241 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000825788s
	[INFO] 10.244.0.8:47709 - 35682 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000084731s
	[INFO] 10.244.0.8:47709 - 54107 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.001236053s
	[INFO] 10.244.0.8:47709 - 13774 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000102403s
	[INFO] 10.244.0.8:47709 - 40929 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000202479s
	[INFO] 10.244.0.8:41390 - 37552 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000158553s
	[INFO] 10.244.0.8:41390 - 37218 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000546773s
	[INFO] 10.244.0.8:38427 - 11868 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126388s
	[INFO] 10.244.0.8:38427 - 12103 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000140623s
	[INFO] 10.244.0.8:53333 - 56502 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102249s
	[INFO] 10.244.0.8:53333 - 56739 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000904284s
	[INFO] 10.244.0.8:43458 - 38347 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000101301s
	[INFO] 10.244.0.8:43458 - 37927 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000827121s
	[INFO] 10.244.0.23:41189 - 40887 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00036815s
	[INFO] 10.244.0.23:43359 - 41448 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00314539s
	[INFO] 10.244.0.23:44274 - 34184 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001523397s
	[INFO] 10.244.0.23:36899 - 28018 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00017986s
	[INFO] 10.244.0.23:33538 - 2320 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000084904s
	[INFO] 10.244.0.23:52533 - 32900 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00025873s
	[INFO] 10.244.0.23:36996 - 34039 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003986359s
	[INFO] 10.244.0.23:46249 - 33442 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000949536s
	[INFO] 10.244.0.27:60970 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000391363s
	[INFO] 10.244.0.27:38810 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145539s
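
The paired NXDOMAIN/NOERROR queries above are the pod's stub resolver walking its search domains (the usual ndots:5 expansion from the kubelet-generated resolv.conf) before the fully qualified service name resolves. A minimal Go sketch of the short-circuit, meant to run inside a cluster pod (an assumption); the trailing dot marks the name as already fully qualified, so the resolver issues a single query instead of the expansion chain visible in the log:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Rooted FQDN: the trailing dot suppresses search-domain expansion
		// (registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local
		// and friends, as seen in the coredns log above).
		addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println(addrs)
	}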
	
	
	==> describe nodes <==
	Name:               addons-768633
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-768633
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=addons-768633
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T18_57_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-768633
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 18:57:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-768633
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:01:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 18:59:50 +0000   Fri, 17 Oct 2025 18:57:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 18:59:50 +0000   Fri, 17 Oct 2025 18:57:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 18:59:50 +0000   Fri, 17 Oct 2025 18:57:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 18:59:50 +0000   Fri, 17 Oct 2025 18:57:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    addons-768633
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 a317fb0b22bc4b4c914b3b4610e8d8d3
	  System UUID:                a317fb0b-22bc-4b4c-914b-3b4610e8d8d3
	  Boot ID:                    683a5544-b23e-4817-8f24-4a40a21bb080
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  default                     hello-world-app-5d498dc89-2p5zd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gadget                      gadget-xnjmh                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-w4dgz    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m20s
	  kube-system                 amd-gpu-device-plugin-tfnp7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 coredns-66bc5c9577-hcp9r                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m30s
	  kube-system                 etcd-addons-768633                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m35s
	  kube-system                 kube-apiserver-addons-768633                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-controller-manager-addons-768633        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-proxy-dnjlc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-scheduler-addons-768633                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m28s  kube-proxy       
	  Normal  Starting                 4m35s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m35s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m35s  kubelet          Node addons-768633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m35s  kubelet          Node addons-768633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m35s  kubelet          Node addons-768633 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m34s  kubelet          Node addons-768633 status is now: NodeReady
	  Normal  RegisteredNode           4m31s  node-controller  Node addons-768633 event: Registered Node addons-768633 in Controller
	
	
	==> dmesg <==
	[  +0.301093] kauditd_printk_skb: 290 callbacks suppressed
	[  +0.394845] kauditd_printk_skb: 357 callbacks suppressed
	[ +15.336292] kauditd_printk_skb: 58 callbacks suppressed
	[  +6.150627] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.364073] kauditd_printk_skb: 32 callbacks suppressed
	[Oct17 18:58] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.350760] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.192460] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.952896] kauditd_printk_skb: 103 callbacks suppressed
	[  +0.916276] kauditd_printk_skb: 157 callbacks suppressed
	[  +5.539831] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.363276] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000319] kauditd_printk_skb: 20 callbacks suppressed
	[Oct17 18:59] kauditd_printk_skb: 53 callbacks suppressed
	[  +6.128790] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.051191] kauditd_printk_skb: 38 callbacks suppressed
	[  +3.305041] kauditd_printk_skb: 99 callbacks suppressed
	[  +4.369886] kauditd_printk_skb: 68 callbacks suppressed
	[  +2.947691] kauditd_printk_skb: 162 callbacks suppressed
	[  +0.725545] kauditd_printk_skb: 115 callbacks suppressed
	[  +2.416204] kauditd_printk_skb: 90 callbacks suppressed
	[  +6.957459] kauditd_printk_skb: 10 callbacks suppressed
	[Oct17 19:00] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.460076] kauditd_printk_skb: 61 callbacks suppressed
	[Oct17 19:01] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [be38e0026d7efcbd60974c798c272870d09eb6d8465fad637d3fa0592eb8d72f] <==
	{"level":"info","ts":"2025-10-17T18:59:10.857536Z","caller":"traceutil/trace.go:172","msg":"trace[92862783] transaction","detail":"{read_only:false; response_revision:1311; number_of_response:1; }","duration":"389.216493ms","start":"2025-10-17T18:59:10.468312Z","end":"2025-10-17T18:59:10.857529Z","steps":["trace[92862783] 'process raft request'  (duration: 387.901621ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T18:59:10.857900Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T18:59:10.468290Z","time spent":"389.262186ms","remote":"127.0.0.1:55216","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1279 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2025-10-17T18:59:10.859495Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"325.463072ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T18:59:10.859521Z","caller":"traceutil/trace.go:172","msg":"trace[109349212] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1311; }","duration":"325.491799ms","start":"2025-10-17T18:59:10.534023Z","end":"2025-10-17T18:59:10.859515Z","steps":["trace[109349212] 'agreement among raft nodes before linearized reading'  (duration: 325.44805ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T18:59:10.859536Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T18:59:10.534009Z","time spent":"325.523669ms","remote":"127.0.0.1:55078","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-10-17T18:59:10.859604Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"326.09818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T18:59:10.859616Z","caller":"traceutil/trace.go:172","msg":"trace[1288823566] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1311; }","duration":"326.110118ms","start":"2025-10-17T18:59:10.533502Z","end":"2025-10-17T18:59:10.859612Z","steps":["trace[1288823566] 'agreement among raft nodes before linearized reading'  (duration: 326.087044ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T18:59:10.859627Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T18:59:10.533482Z","time spent":"326.142214ms","remote":"127.0.0.1:55078","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-10-17T18:59:15.183613Z","caller":"traceutil/trace.go:172","msg":"trace[1500705579] transaction","detail":"{read_only:false; response_revision:1360; number_of_response:1; }","duration":"165.87898ms","start":"2025-10-17T18:59:15.017720Z","end":"2025-10-17T18:59:15.183599Z","steps":["trace[1500705579] 'process raft request'  (duration: 165.71953ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T18:59:16.881429Z","caller":"traceutil/trace.go:172","msg":"trace[1196216497] linearizableReadLoop","detail":"{readStateIndex:1421; appliedIndex:1421; }","duration":"235.314108ms","start":"2025-10-17T18:59:16.646099Z","end":"2025-10-17T18:59:16.881414Z","steps":["trace[1196216497] 'read index received'  (duration: 235.306299ms)","trace[1196216497] 'applied index is now lower than readState.Index'  (duration: 6.828µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-17T18:59:16.881607Z","caller":"traceutil/trace.go:172","msg":"trace[1262019054] transaction","detail":"{read_only:false; response_revision:1376; number_of_response:1; }","duration":"362.162249ms","start":"2025-10-17T18:59:16.519434Z","end":"2025-10-17T18:59:16.881596Z","steps":["trace[1262019054] 'process raft request'  (duration: 362.014966ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T18:59:16.881646Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"235.529497ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T18:59:16.881671Z","caller":"traceutil/trace.go:172","msg":"trace[1425992385] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1376; }","duration":"235.568685ms","start":"2025-10-17T18:59:16.646096Z","end":"2025-10-17T18:59:16.881665Z","steps":["trace[1425992385] 'agreement among raft nodes before linearized reading'  (duration: 235.502121ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T18:59:16.881697Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-17T18:59:16.519415Z","time spent":"362.227935ms","remote":"127.0.0.1:55078","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3934,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/headlamp/headlamp-6945c6f4d-qpqvb\" mod_revision:1366 > success:<request_put:<key:\"/registry/pods/headlamp/headlamp-6945c6f4d-qpqvb\" value_size:3878 >> failure:<request_range:<key:\"/registry/pods/headlamp/headlamp-6945c6f4d-qpqvb\" > >"}
	{"level":"warn","ts":"2025-10-17T18:59:16.881919Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.214362ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T18:59:16.881954Z","caller":"traceutil/trace.go:172","msg":"trace[213819119] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1376; }","duration":"111.236347ms","start":"2025-10-17T18:59:16.770698Z","end":"2025-10-17T18:59:16.881934Z","steps":["trace[213819119] 'agreement among raft nodes before linearized reading'  (duration: 111.199779ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T18:59:33.521398Z","caller":"traceutil/trace.go:172","msg":"trace[715852816] transaction","detail":"{read_only:false; response_revision:1540; number_of_response:1; }","duration":"158.968132ms","start":"2025-10-17T18:59:33.362417Z","end":"2025-10-17T18:59:33.521385Z","steps":["trace[715852816] 'process raft request'  (duration: 158.856908ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T18:59:34.615160Z","caller":"traceutil/trace.go:172","msg":"trace[2110277467] linearizableReadLoop","detail":"{readStateIndex:1604; appliedIndex:1604; }","duration":"176.450518ms","start":"2025-10-17T18:59:34.438686Z","end":"2025-10-17T18:59:34.615137Z","steps":["trace[2110277467] 'read index received'  (duration: 176.443362ms)","trace[2110277467] 'applied index is now lower than readState.Index'  (duration: 6.23µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T18:59:34.616056Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.35163ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/local-path\" limit:1 ","response":"range_response_count:1 size:964"}
	{"level":"info","ts":"2025-10-17T18:59:34.616088Z","caller":"traceutil/trace.go:172","msg":"trace[943706878] range","detail":"{range_begin:/registry/storageclasses/local-path; range_end:; response_count:1; response_revision:1547; }","duration":"177.399754ms","start":"2025-10-17T18:59:34.438681Z","end":"2025-10-17T18:59:34.616081Z","steps":["trace[943706878] 'agreement among raft nodes before linearized reading'  (duration: 176.528655ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T18:59:34.616314Z","caller":"traceutil/trace.go:172","msg":"trace[860492331] transaction","detail":"{read_only:false; response_revision:1548; number_of_response:1; }","duration":"204.260205ms","start":"2025-10-17T18:59:34.412044Z","end":"2025-10-17T18:59:34.616304Z","steps":["trace[860492331] 'process raft request'  (duration: 203.372779ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T18:59:35.160107Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"234.3105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T18:59:35.160178Z","caller":"traceutil/trace.go:172","msg":"trace[98039458] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses; range_end:; response_count:0; response_revision:1561; }","duration":"234.391529ms","start":"2025-10-17T18:59:34.925776Z","end":"2025-10-17T18:59:35.160167Z","steps":["trace[98039458] 'range keys from in-memory index tree'  (duration: 233.655127ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T18:59:35.160395Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"144.818551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" limit:1 ","response":"range_response_count:1 size:621"}
	{"level":"info","ts":"2025-10-17T18:59:35.160525Z","caller":"traceutil/trace.go:172","msg":"trace[2048446117] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:1; response_revision:1561; }","duration":"145.225287ms","start":"2025-10-17T18:59:35.015288Z","end":"2025-10-17T18:59:35.160513Z","steps":["trace[2048446117] 'range keys from in-memory index tree'  (duration: 144.375957ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:01:42 up 5 min,  0 users,  load average: 0.57, 1.45, 0.78
	Linux addons-768633 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [d402a702463f4ac7cac87444d0a29c6a2cd4877549704277257f0db0d25aa292] <==
	 > logger="UnhandledError"
	E1017 18:58:08.479372       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.133.131:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.133.131:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.133.131:443: connect: connection refused" logger="UnhandledError"
	E1017 18:58:08.484196       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.133.131:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.133.131:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.133.131:443: connect: connection refused" logger="UnhandledError"
	I1017 18:58:08.591879       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1017 18:58:56.861289       1 conn.go:339] Error on socket receive: read tcp 192.168.39.150:8443->192.168.39.1:52214: use of closed network connection
	I1017 18:59:06.391098       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.238.172"}
	I1017 18:59:12.508426       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1017 18:59:12.763487       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.192.185"}
	I1017 18:59:43.708132       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1017 18:59:50.886558       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1017 19:00:09.495709       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1017 19:00:12.198920       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1017 19:00:12.199206       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1017 19:00:12.349271       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1017 19:00:12.349355       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1017 19:00:12.356629       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1017 19:00:12.356689       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1017 19:00:12.377521       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1017 19:00:12.378069       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1017 19:00:12.441494       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1017 19:00:12.441543       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1017 19:00:13.356927       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1017 19:00:13.442851       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1017 19:00:13.459878       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1017 19:01:40.666306       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.165.213"}
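
The alloc.go lines record the apiserver assigning ClusterIPs as the test creates Services (nginx at 18:59:12, hello-world-app at 19:01:40). A hedged client-go sketch of the kind of call that produces such a line; the kubeconfig path, selector, and port are illustrative assumptions, not taken from the test:

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/intstr"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		svc := &corev1.Service{
			ObjectMeta: metav1.ObjectMeta{Name: "hello-world-app"},
			Spec: corev1.ServiceSpec{
				Selector: map[string]string{"app": "hello-world-app"}, // assumed label
				Ports:    []corev1.ServicePort{{Port: 8080, TargetPort: intstr.FromInt(8080)}},
			},
		}
		created, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// The ClusterIP is allocated server-side during Create; that allocation
		// is what the "allocated clusterIPs" log line records.
		fmt.Println("allocated:", created.Spec.ClusterIP)
	}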
	
	
	==> kube-controller-manager [c204072ef3685d5ea3f804b111bc2137fc8537426751dcfe7bcfed3ce93d97f0] <==
	E1017 19:00:23.218945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:00:23.710382       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:00:23.712355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:00:30.681946       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:00:30.683867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:00:33.191088       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:00:33.192447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:00:33.998163       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:00:33.999082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1017 19:00:41.350312       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1017 19:00:41.350344       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:00:41.467443       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1017 19:00:41.467621       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1017 19:00:48.475436       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:00:48.476526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:00:49.345184       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:00:49.346470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:00:58.699052       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:00:58.700197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:01:24.557926       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:01:24.558983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:01:29.904250       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:01:29.905321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:01:38.849254       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:01:38.850385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
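
These reflector failures line up with the kube-apiserver log above: once the snapshot.storage.k8s.io groups were removed at 19:00:12-13, the controller-manager's metadata informers for those resources keep listing a GVR the server no longer serves. A hedged sketch of how such an informer is built with client-go (in-cluster config assumed; the GVR is one of the deleted snapshot resources):

	package main

	import (
		"context"
		"log"
		"time"

		"k8s.io/apimachinery/pkg/runtime/schema"
		"k8s.io/client-go/metadata"
		"k8s.io/client-go/metadata/metadatainformer"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		mc, err := metadata.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// The garbage collector builds a PartialObjectMetadata informer per
		// resource; with the CRD gone, its list/watch fails exactly as above.
		gvr := schema.GroupVersionResource{
			Group:    "snapshot.storage.k8s.io",
			Version:  "v1",
			Resource: "volumesnapshots",
		}
		inf := metadatainformer.NewSharedInformerFactory(mc, 10*time.Minute).
			ForResource(gvr).Informer()

		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		go inf.Run(ctx.Done())
		<-ctx.Done()
	}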
	
	
	==> kube-proxy [734d8668ed47e51ac15a16bd24aefbbfc0305ad4f5b0aafca5b9786a3566cc6c] <==
	I1017 18:57:13.995879       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 18:57:14.099990       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 18:57:14.100253       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.150"]
	E1017 18:57:14.100545       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 18:57:14.326356       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1017 18:57:14.326426       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1017 18:57:14.326456       1 server_linux.go:132] "Using iptables Proxier"
	I1017 18:57:14.368711       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 18:57:14.376978       1 server.go:527] "Version info" version="v1.34.1"
	I1017 18:57:14.377097       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 18:57:14.409410       1 config.go:200] "Starting service config controller"
	I1017 18:57:14.409443       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 18:57:14.409459       1 config.go:106] "Starting endpoint slice config controller"
	I1017 18:57:14.409462       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 18:57:14.409471       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 18:57:14.409475       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 18:57:14.418514       1 config.go:309] "Starting node config controller"
	I1017 18:57:14.418551       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 18:57:14.418558       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 18:57:14.516978       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 18:57:14.517021       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 18:57:14.517055       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
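
The startup warning above recommends narrowing nodePortAddresses. A sketch of the corresponding KubeProxyConfiguration field, under the assumption that the k8s.io/kube-proxy config API is available at this version; "primary" is the shorthand the warning itself suggests:

	package main

	import (
		"fmt"

		v1alpha1 "k8s.io/kube-proxy/config/v1alpha1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		cfg := v1alpha1.KubeProxyConfiguration{
			// Accept NodePort connections only on the node's primary IP,
			// which addresses the "nodePortAddresses is unset" warning.
			NodePortAddresses: []string{"primary"},
		}
		cfg.APIVersion = "kubeproxy.config.k8s.io/v1alpha1"
		cfg.Kind = "KubeProxyConfiguration"

		out, err := yaml.Marshal(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}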
	
	
	==> kube-scheduler [f428ab4d86406223481dee5a7d0181511d01f82be789ee5af1d1f70a7ba07e4e] <==
	E1017 18:57:04.206859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 18:57:04.211240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 18:57:04.219463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 18:57:04.219698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 18:57:04.219740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 18:57:04.219780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 18:57:04.221671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 18:57:04.221679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 18:57:05.256620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 18:57:05.278562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 18:57:05.289348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 18:57:05.326962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 18:57:05.337509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 18:57:05.358487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 18:57:05.384955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 18:57:05.391542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 18:57:05.401227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 18:57:05.414083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 18:57:05.449499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 18:57:05.463557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 18:57:05.504645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 18:57:05.588239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 18:57:05.609427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 18:57:05.696980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1017 18:57:07.583407       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:00:15 addons-768633 kubelet[1498]: I1017 19:00:15.385512    1498 scope.go:117] "RemoveContainer" containerID="6bea104a42c1dcd761b9cc49d30cbb64cc124dc2056f36e2075c5d9b11068636"
	Oct 17 19:00:15 addons-768633 kubelet[1498]: I1017 19:00:15.386249    1498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bea104a42c1dcd761b9cc49d30cbb64cc124dc2056f36e2075c5d9b11068636"} err="failed to get container status \"6bea104a42c1dcd761b9cc49d30cbb64cc124dc2056f36e2075c5d9b11068636\": rpc error: code = NotFound desc = could not find container \"6bea104a42c1dcd761b9cc49d30cbb64cc124dc2056f36e2075c5d9b11068636\": container with ID starting with 6bea104a42c1dcd761b9cc49d30cbb64cc124dc2056f36e2075c5d9b11068636 not found: ID does not exist"
	Oct 17 19:00:15 addons-768633 kubelet[1498]: I1017 19:00:15.386307    1498 scope.go:117] "RemoveContainer" containerID="3cfcabd0327164eca31fb864d1cc1dc1318a6879f77d7fa92e625d36a5dfec9f"
	Oct 17 19:00:15 addons-768633 kubelet[1498]: I1017 19:00:15.386747    1498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cfcabd0327164eca31fb864d1cc1dc1318a6879f77d7fa92e625d36a5dfec9f"} err="failed to get container status \"3cfcabd0327164eca31fb864d1cc1dc1318a6879f77d7fa92e625d36a5dfec9f\": rpc error: code = NotFound desc = could not find container \"3cfcabd0327164eca31fb864d1cc1dc1318a6879f77d7fa92e625d36a5dfec9f\": container with ID starting with 3cfcabd0327164eca31fb864d1cc1dc1318a6879f77d7fa92e625d36a5dfec9f not found: ID does not exist"
	Oct 17 19:00:17 addons-768633 kubelet[1498]: E1017 19:00:17.889717    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760727617889186107  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:00:17 addons-768633 kubelet[1498]: E1017 19:00:17.889768    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760727617889186107  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:00:27 addons-768633 kubelet[1498]: E1017 19:00:27.894055    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760727627893123293  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:00:27 addons-768633 kubelet[1498]: E1017 19:00:27.894079    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760727627893123293  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:00:37 addons-768633 kubelet[1498]: E1017 19:00:37.897400    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760727637896887589  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:00:37 addons-768633 kubelet[1498]: E1017 19:00:37.897424    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760727637896887589  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:00:47 addons-768633 kubelet[1498]: E1017 19:00:47.901082    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760727647899613152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:00:47 addons-768633 kubelet[1498]: E1017 19:00:47.901171    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760727647899613152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:00:57 addons-768633 kubelet[1498]: E1017 19:00:57.905534    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760727657904720950  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:00:57 addons-768633 kubelet[1498]: E1017 19:00:57.905582    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760727657904720950  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:01:07 addons-768633 kubelet[1498]: E1017 19:01:07.908555    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760727667908098554  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:01:07 addons-768633 kubelet[1498]: E1017 19:01:07.908689    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760727667908098554  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:01:17 addons-768633 kubelet[1498]: E1017 19:01:17.912584    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760727677912004550  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:01:17 addons-768633 kubelet[1498]: E1017 19:01:17.912633    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760727677912004550  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:01:19 addons-768633 kubelet[1498]: I1017 19:01:19.300237    1498 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-tfnp7" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:01:27 addons-768633 kubelet[1498]: E1017 19:01:27.917308    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760727687916255813  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:01:27 addons-768633 kubelet[1498]: E1017 19:01:27.917362    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760727687916255813  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:01:31 addons-768633 kubelet[1498]: I1017 19:01:31.300395    1498 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:01:37 addons-768633 kubelet[1498]: E1017 19:01:37.920491    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760727697919974656  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:01:37 addons-768633 kubelet[1498]: E1017 19:01:37.920522    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760727697919974656  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:01:40 addons-768633 kubelet[1498]: I1017 19:01:40.704463    1498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcbqc\" (UniqueName: \"kubernetes.io/projected/239806dd-ed71-40e6-ad0b-c47680e70554-kube-api-access-rcbqc\") pod \"hello-world-app-5d498dc89-2p5zd\" (UID: \"239806dd-ed71-40e6-ad0b-c47680e70554\") " pod="default/hello-world-app-5d498dc89-2p5zd"
	
	
	==> storage-provisioner [f01c81dd4bcf10f0ce4ee1c236b69510dd362a85e7ecb92743ed61545fd63db0] <==
	W1017 19:01:18.237394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:20.242935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:20.250510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:22.254382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:22.259604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:24.263449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:24.273359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:26.277493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:26.283534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:28.287031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:28.295188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:30.299289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:30.304343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:32.307556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:32.314350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:34.317743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:34.323531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:36.328009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:36.334424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:38.337248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:38.342535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:40.346009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:40.352523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:42.357050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:01:42.362626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
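Two log signatures above are worth a note before the post-mortem continues. The kubelet's repeated "failed to get HasDedicatedImageFs: missing image stats" errors come from the eviction manager polling CRI image-filesystem stats, and the storage-provisioner warnings are the apiserver's deprecation notice steering v1 Endpoints watchers toward discovery.k8s.io/v1 EndpointSlice. A minimal way to inspect both by hand, assuming the addons-768633 cluster is still running (illustrative commands, not part of the recorded test run):

	# CRI-O's view of the image filesystem the eviction manager is querying
	out/minikube-linux-amd64 -p addons-768633 ssh "sudo crictl imagefsinfo"
	# kubelet's own stats summary, served through the apiserver node proxy
	kubectl --context addons-768633 get --raw /api/v1/nodes/addons-768633/proxy/stats/summary
	# the EndpointSlice objects the deprecation warning points to
	kubectl --context addons-768633 get endpointslices.discovery.k8s.io -A
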
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-768633 -n addons-768633
helpers_test.go:269: (dbg) Run:  kubectl --context addons-768633 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-rqvzx ingress-nginx-admission-patch-27ffl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-768633 describe pod ingress-nginx-admission-create-rqvzx ingress-nginx-admission-patch-27ffl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-768633 describe pod ingress-nginx-admission-create-rqvzx ingress-nginx-admission-patch-27ffl: exit status 1 (77.295523ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rqvzx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-27ffl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-768633 describe pod ingress-nginx-admission-create-rqvzx ingress-nginx-admission-patch-27ffl: exit status 1
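The two NotFound errors are expected churn rather than an additional failure: ingress-nginx-admission-create and ingress-nginx-admission-patch are one-shot Jobs, so their pods sit in a terminal Succeeded phase (hence "non-running") and can be garbage-collected between the pod listing and the describe call. A quick hedged check, assuming the cluster is still reachable when run:

	# the completed admission jobs, if the addon has not been disabled yet
	kubectl --context addons-768633 -n ingress-nginx get jobs
	# any pods still lingering in a terminal phase
	kubectl --context addons-768633 get pods -A --field-selector=status.phase=Succeeded
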
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-768633 addons disable ingress-dns --alsologtostderr -v=1: (1.60984023s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-768633 addons disable ingress --alsologtostderr -v=1: (7.84122866s)
--- FAIL: TestAddons/parallel/Ingress (160.70s)

TestFunctional/serial/SoftStart (1234.9s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1017 19:06:56.527194   79439 config.go:182] Loaded profile config "functional-016863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-016863 --alsologtostderr -v=8
E1017 19:08:47.742461   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:09:15.452406   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:13:47.748238   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:18:47.742046   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:20:10.815806   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
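These cert_rotation errors are client-side noise: the shared kubeconfig still holds a user entry for the addons-768633 profile, whose client.crt is apparently no longer on disk, so the client-go cert loader keeps probing a file that does not exist. A cleanup sketch, assuming one wanted to silence them outside the harness (the test framework manages profiles itself):

	# drop the stale kubeconfig entries left over from the addons profile
	kubectl config delete-context addons-768633
	kubectl config delete-user addons-768633
	# or let minikube prune its own kubeconfig references
	out/minikube-linux-amd64 delete -p addons-768633
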
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-016863 --alsologtostderr -v=8: exit status 80 (13m58.120212621s)

                                                
                                                
-- stdout --
	* [functional-016863] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-016863" primary control-plane node in "functional-016863" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:06:56.570682   85117 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:06:56.570809   85117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:56.570820   85117 out.go:374] Setting ErrFile to fd 2...
	I1017 19:06:56.570826   85117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:56.571105   85117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-75534/.minikube/bin
	I1017 19:06:56.571578   85117 out.go:368] Setting JSON to false
	I1017 19:06:56.572426   85117 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6568,"bootTime":1760721449,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:06:56.572524   85117 start.go:141] virtualization: kvm guest
	I1017 19:06:56.574519   85117 out.go:179] * [functional-016863] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:06:56.575690   85117 notify.go:220] Checking for updates...
	I1017 19:06:56.575704   85117 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:06:56.577138   85117 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:06:56.578363   85117 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig
	I1017 19:06:56.579669   85117 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube
	I1017 19:06:56.581027   85117 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:06:56.582307   85117 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:06:56.583921   85117 config.go:182] Loaded profile config "functional-016863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:06:56.584037   85117 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:06:56.584492   85117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:06:56.584589   85117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:06:56.600478   85117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35877
	I1017 19:06:56.600991   85117 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:06:56.601750   85117 main.go:141] libmachine: Using API Version  1
	I1017 19:06:56.601786   85117 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:06:56.602161   85117 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:06:56.602390   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:06:56.635697   85117 out.go:179] * Using the kvm2 driver based on existing profile
	I1017 19:06:56.637016   85117 start.go:305] selected driver: kvm2
	I1017 19:06:56.637040   85117 start.go:925] validating driver "kvm2" against &{Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:06:56.637141   85117 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:06:56.637622   85117 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:06:56.637712   85117 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:06:56.651574   85117 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:06:56.651619   85117 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:06:56.665844   85117 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:06:56.666547   85117 cni.go:84] Creating CNI manager for ""
	I1017 19:06:56.666631   85117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 19:06:56.666699   85117 start.go:349] cluster config:
	{Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:06:56.666812   85117 iso.go:125] acquiring lock: {Name:mk89d24a0bd9a0a8cf0564a4affa55e11eaff101 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:06:56.668638   85117 out.go:179] * Starting "functional-016863" primary control-plane node in "functional-016863" cluster
	I1017 19:06:56.669893   85117 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:06:56.669940   85117 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:06:56.669951   85117 cache.go:58] Caching tarball of preloaded images
	I1017 19:06:56.670102   85117 preload.go:233] Found /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:06:56.670116   85117 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:06:56.670235   85117 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/config.json ...
	I1017 19:06:56.670445   85117 start.go:360] acquireMachinesLock for functional-016863: {Name:mke0c3abe726945d0c60793aa0bf26eb33df7fed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1017 19:06:56.670494   85117 start.go:364] duration metric: took 29.325µs to acquireMachinesLock for "functional-016863"
	I1017 19:06:56.670514   85117 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:06:56.670524   85117 fix.go:54] fixHost starting: 
	I1017 19:06:56.670828   85117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:06:56.670877   85117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:06:56.683516   85117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I1017 19:06:56.683978   85117 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:06:56.684470   85117 main.go:141] libmachine: Using API Version  1
	I1017 19:06:56.684493   85117 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:06:56.684844   85117 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:06:56.685047   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:06:56.685223   85117 main.go:141] libmachine: (functional-016863) Calling .GetState
	I1017 19:06:56.686913   85117 fix.go:112] recreateIfNeeded on functional-016863: state=Running err=<nil>
	W1017 19:06:56.686945   85117 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:06:56.688754   85117 out.go:252] * Updating the running kvm2 "functional-016863" VM ...
	I1017 19:06:56.688779   85117 machine.go:93] provisionDockerMachine start ...
	I1017 19:06:56.688795   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:06:56.689021   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:56.691985   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.692501   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:56.692527   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.692713   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:56.692904   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.693142   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.693299   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:56.693474   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:56.693724   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:56.693736   85117 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:06:56.799511   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016863
	
	I1017 19:06:56.799542   85117 main.go:141] libmachine: (functional-016863) Calling .GetMachineName
	I1017 19:06:56.799819   85117 buildroot.go:166] provisioning hostname "functional-016863"
	I1017 19:06:56.799862   85117 main.go:141] libmachine: (functional-016863) Calling .GetMachineName
	I1017 19:06:56.800154   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:56.803810   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.804342   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:56.804375   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.804593   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:56.804779   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.804950   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.805112   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:56.805279   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:56.805490   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:56.805503   85117 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-016863 && echo "functional-016863" | sudo tee /etc/hostname
	I1017 19:06:56.929174   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016863
	
	I1017 19:06:56.929205   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:56.932429   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.932929   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:56.932954   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.933186   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:56.933423   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.933612   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.933826   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:56.934076   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:56.934309   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:56.934326   85117 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-016863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-016863/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-016863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:06:57.042297   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:06:57.042330   85117 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21753-75534/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-75534/.minikube}
	I1017 19:06:57.042373   85117 buildroot.go:174] setting up certificates
	I1017 19:06:57.042382   85117 provision.go:84] configureAuth start
	I1017 19:06:57.042395   85117 main.go:141] libmachine: (functional-016863) Calling .GetMachineName
	I1017 19:06:57.042715   85117 main.go:141] libmachine: (functional-016863) Calling .GetIP
	I1017 19:06:57.045902   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.046469   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.046508   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.046778   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:57.049360   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.049857   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.049902   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.050076   85117 provision.go:143] copyHostCerts
	I1017 19:06:57.050123   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem
	I1017 19:06:57.050183   85117 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem, removing ...
	I1017 19:06:57.050205   85117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem
	I1017 19:06:57.050294   85117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem (1082 bytes)
	I1017 19:06:57.050425   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem
	I1017 19:06:57.050463   85117 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem, removing ...
	I1017 19:06:57.050473   85117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem
	I1017 19:06:57.050602   85117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem (1123 bytes)
	I1017 19:06:57.050772   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem
	I1017 19:06:57.050815   85117 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem, removing ...
	I1017 19:06:57.050825   85117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem
	I1017 19:06:57.050881   85117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem (1679 bytes)
	I1017 19:06:57.051013   85117 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem org=jenkins.functional-016863 san=[127.0.0.1 192.168.39.205 functional-016863 localhost minikube]
	I1017 19:06:57.269277   85117 provision.go:177] copyRemoteCerts
	I1017 19:06:57.269362   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:06:57.269401   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:57.272458   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.272834   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.272866   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.273060   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:57.273266   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:57.273480   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:57.273640   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:06:57.362432   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:06:57.362511   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:06:57.412884   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:06:57.413107   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 19:06:57.450092   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:06:57.450212   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:06:57.486026   85117 provision.go:87] duration metric: took 443.605637ms to configureAuth
	I1017 19:06:57.486057   85117 buildroot.go:189] setting minikube options for container-runtime
	I1017 19:06:57.486228   85117 config.go:182] Loaded profile config "functional-016863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:06:57.486309   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:57.489476   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.489895   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.489928   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.490160   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:57.490354   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:57.490544   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:57.490703   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:57.490888   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:57.491101   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:57.491114   85117 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:07:03.084984   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:07:03.085021   85117 machine.go:96] duration metric: took 6.396234121s to provisionDockerMachine
	I1017 19:07:03.085042   85117 start.go:293] postStartSetup for "functional-016863" (driver="kvm2")
	I1017 19:07:03.085056   85117 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:07:03.085084   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.085514   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:07:03.085593   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.089211   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.089621   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.089655   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.089838   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.090055   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.090184   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.090354   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:07:03.173813   85117 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:07:03.179411   85117 command_runner.go:130] > NAME=Buildroot
	I1017 19:07:03.179437   85117 command_runner.go:130] > VERSION=2025.02-dirty
	I1017 19:07:03.179441   85117 command_runner.go:130] > ID=buildroot
	I1017 19:07:03.179446   85117 command_runner.go:130] > VERSION_ID=2025.02
	I1017 19:07:03.179452   85117 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1017 19:07:03.179493   85117 info.go:137] Remote host: Buildroot 2025.02
	I1017 19:07:03.179508   85117 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-75534/.minikube/addons for local assets ...
	I1017 19:07:03.179595   85117 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-75534/.minikube/files for local assets ...
	I1017 19:07:03.179714   85117 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> 794392.pem in /etc/ssl/certs
	I1017 19:07:03.179729   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> /etc/ssl/certs/794392.pem
	I1017 19:07:03.179835   85117 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts -> hosts in /etc/test/nested/copy/79439
	I1017 19:07:03.179847   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts -> /etc/test/nested/copy/79439/hosts
	I1017 19:07:03.179893   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/79439
	I1017 19:07:03.192128   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem --> /etc/ssl/certs/794392.pem (1708 bytes)
	I1017 19:07:03.223838   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts --> /etc/test/nested/copy/79439/hosts (40 bytes)
	I1017 19:07:03.313679   85117 start.go:296] duration metric: took 228.61978ms for postStartSetup
	I1017 19:07:03.313721   85117 fix.go:56] duration metric: took 6.643198174s for fixHost
	I1017 19:07:03.313742   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.317578   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.318077   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.318115   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.318367   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.318648   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.318838   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.319029   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.319295   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:07:03.319597   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:07:03.319613   85117 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1017 19:07:03.479608   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760728023.470011514
	
	I1017 19:07:03.479635   85117 fix.go:216] guest clock: 1760728023.470011514
	I1017 19:07:03.479642   85117 fix.go:229] Guest: 2025-10-17 19:07:03.470011514 +0000 UTC Remote: 2025-10-17 19:07:03.313724873 +0000 UTC m=+6.781586281 (delta=156.286641ms)
	I1017 19:07:03.479664   85117 fix.go:200] guest clock delta is within tolerance: 156.286641ms
	I1017 19:07:03.479671   85117 start.go:83] releasing machines lock for "functional-016863", held for 6.809163445s
	I1017 19:07:03.479692   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.480016   85117 main.go:141] libmachine: (functional-016863) Calling .GetIP
	I1017 19:07:03.483255   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.483786   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.483830   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.484026   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.484650   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.484910   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.485041   85117 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:07:03.485087   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.485146   85117 ssh_runner.go:195] Run: cat /version.json
	I1017 19:07:03.485170   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.488247   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.488613   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.488732   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.488760   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.488948   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.489117   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.489150   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.489166   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.489373   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.489440   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.489584   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.489660   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:07:03.489750   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.489896   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:07:03.669674   85117 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1017 19:07:03.669755   85117 command_runner.go:130] > {"iso_version": "v1.37.0-1760609724-21757", "kicbase_version": "v0.0.48-1760363564-21724", "minikube_version": "v1.37.0", "commit": "fd6729aa481bc45098452b0ed0ffbe097c29d1bb"}
	I1017 19:07:03.669885   85117 ssh_runner.go:195] Run: systemctl --version
	I1017 19:07:03.691813   85117 command_runner.go:130] > systemd 256 (256.7)
	I1017 19:07:03.691879   85117 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1017 19:07:03.691965   85117 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:07:03.942910   85117 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1017 19:07:03.963385   85117 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1017 19:07:03.963654   85117 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:07:03.963723   85117 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:07:04.004504   85117 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:07:04.004543   85117 start.go:495] detecting cgroup driver to use...
	I1017 19:07:04.004649   85117 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:07:04.048623   85117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:07:04.093677   85117 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:07:04.093751   85117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:07:04.125946   85117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:07:04.177031   85117 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:07:04.556434   85117 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:07:04.871840   85117 docker.go:234] disabling docker service ...
	I1017 19:07:04.871920   85117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:07:04.914455   85117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:07:04.944209   85117 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:07:05.273173   85117 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:07:05.563772   85117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:07:05.602259   85117 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:07:05.639391   85117 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1017 19:07:05.639452   85117 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:07:05.639509   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.662293   85117 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:07:05.662360   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.681766   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.702415   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.723309   85117 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:07:05.743334   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.758794   85117 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.777348   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.792297   85117 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:07:05.810337   85117 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1017 19:07:05.810427   85117 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:07:05.829378   85117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:07:06.061473   85117 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:08:36.459335   85117 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.39776602s)
	I1017 19:08:36.459402   85117 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:08:36.459487   85117 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:08:36.466176   85117 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1017 19:08:36.466208   85117 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1017 19:08:36.466216   85117 command_runner.go:130] > Device: 0,23	Inode: 1978        Links: 1
	I1017 19:08:36.466222   85117 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1017 19:08:36.466229   85117 command_runner.go:130] > Access: 2025-10-17 19:08:36.354383352 +0000
	I1017 19:08:36.466239   85117 command_runner.go:130] > Modify: 2025-10-17 19:08:36.274379788 +0000
	I1017 19:08:36.466245   85117 command_runner.go:130] > Change: 2025-10-17 19:08:36.274379788 +0000
	I1017 19:08:36.466267   85117 command_runner.go:130] >  Birth: 2025-10-17 19:08:36.274379788 +0000
	I1017 19:08:36.466319   85117 start.go:563] Will wait 60s for crictl version
	I1017 19:08:36.466390   85117 ssh_runner.go:195] Run: which crictl
	I1017 19:08:36.470951   85117 command_runner.go:130] > /usr/bin/crictl
	I1017 19:08:36.471037   85117 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1017 19:08:36.516077   85117 command_runner.go:130] > Version:  0.1.0
	I1017 19:08:36.516101   85117 command_runner.go:130] > RuntimeName:  cri-o
	I1017 19:08:36.516106   85117 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1017 19:08:36.516111   85117 command_runner.go:130] > RuntimeApiVersion:  v1
	I1017 19:08:36.516132   85117 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1017 19:08:36.516223   85117 ssh_runner.go:195] Run: crio --version
	I1017 19:08:36.548879   85117 command_runner.go:130] > crio version 1.29.1
	I1017 19:08:36.548904   85117 command_runner.go:130] > Version:        1.29.1
	I1017 19:08:36.548909   85117 command_runner.go:130] > GitCommit:      unknown
	I1017 19:08:36.548925   85117 command_runner.go:130] > GitCommitDate:  unknown
	I1017 19:08:36.548929   85117 command_runner.go:130] > GitTreeState:   clean
	I1017 19:08:36.548935   85117 command_runner.go:130] > BuildDate:      2025-10-16T13:23:57Z
	I1017 19:08:36.548939   85117 command_runner.go:130] > GoVersion:      go1.23.4
	I1017 19:08:36.548942   85117 command_runner.go:130] > Compiler:       gc
	I1017 19:08:36.548947   85117 command_runner.go:130] > Platform:       linux/amd64
	I1017 19:08:36.548951   85117 command_runner.go:130] > Linkmode:       dynamic
	I1017 19:08:36.548955   85117 command_runner.go:130] > BuildTags:      
	I1017 19:08:36.548959   85117 command_runner.go:130] >   containers_image_ostree_stub
	I1017 19:08:36.548963   85117 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1017 19:08:36.548966   85117 command_runner.go:130] >   btrfs_noversion
	I1017 19:08:36.548970   85117 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1017 19:08:36.548974   85117 command_runner.go:130] >   libdm_no_deferred_remove
	I1017 19:08:36.548978   85117 command_runner.go:130] >   seccomp
	I1017 19:08:36.548982   85117 command_runner.go:130] > LDFlags:          unknown
	I1017 19:08:36.549001   85117 command_runner.go:130] > SeccompEnabled:   true
	I1017 19:08:36.549005   85117 command_runner.go:130] > AppArmorEnabled:  false
	I1017 19:08:36.549081   85117 ssh_runner.go:195] Run: crio --version
	I1017 19:08:36.579072   85117 command_runner.go:130] > crio version 1.29.1
	I1017 19:08:36.579097   85117 command_runner.go:130] > Version:        1.29.1
	I1017 19:08:36.579102   85117 command_runner.go:130] > GitCommit:      unknown
	I1017 19:08:36.579106   85117 command_runner.go:130] > GitCommitDate:  unknown
	I1017 19:08:36.579109   85117 command_runner.go:130] > GitTreeState:   clean
	I1017 19:08:36.579114   85117 command_runner.go:130] > BuildDate:      2025-10-16T13:23:57Z
	I1017 19:08:36.579118   85117 command_runner.go:130] > GoVersion:      go1.23.4
	I1017 19:08:36.579122   85117 command_runner.go:130] > Compiler:       gc
	I1017 19:08:36.579126   85117 command_runner.go:130] > Platform:       linux/amd64
	I1017 19:08:36.579129   85117 command_runner.go:130] > Linkmode:       dynamic
	I1017 19:08:36.579133   85117 command_runner.go:130] > BuildTags:      
	I1017 19:08:36.579137   85117 command_runner.go:130] >   containers_image_ostree_stub
	I1017 19:08:36.579141   85117 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1017 19:08:36.579144   85117 command_runner.go:130] >   btrfs_noversion
	I1017 19:08:36.579148   85117 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1017 19:08:36.579152   85117 command_runner.go:130] >   libdm_no_deferred_remove
	I1017 19:08:36.579156   85117 command_runner.go:130] >   seccomp
	I1017 19:08:36.579159   85117 command_runner.go:130] > LDFlags:          unknown
	I1017 19:08:36.579162   85117 command_runner.go:130] > SeccompEnabled:   true
	I1017 19:08:36.579166   85117 command_runner.go:130] > AppArmorEnabled:  false
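The two identical dumps above are two separate crio --version invocations (note the distinct timestamps), not a duplicated log; the "Preparing Kubernetes" banner that follows is derived from this output. To pull out just the fields checked here, something like this works on the node (illustrative):

  $ crio --version | grep -E '^(Version|GoVersion|SeccompEnabled|AppArmorEnabled)'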
	I1017 19:08:36.581921   85117 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1017 19:08:36.583156   85117 main.go:141] libmachine: (functional-016863) Calling .GetIP
	I1017 19:08:36.586303   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:08:36.586761   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:08:36.586791   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:08:36.587045   85117 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1017 19:08:36.592096   85117 command_runner.go:130] > 192.168.39.1	host.minikube.internal
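Here minikube confirms that the guest's /etc/hosts maps host.minikube.internal to 192.168.39.1, the host side of the virbr1 network, so workloads in the VM can reach the host machine. The same check by hand (sketch):

  $ minikube ssh -p functional-016863 -- grep host.minikube.internal /etc/hosts
  192.168.39.1	host.minikube.internal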
	I1017 19:08:36.592194   85117 kubeadm.go:883] updating cluster {Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
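The blob above is minikube's ClusterConfig struct rendered with Go's %+v verb; the fields that matter for this run are Driver:kvm2, APIServerPort:8441, KubernetesVersion:v1.34.1 and ContainerRuntime:crio. The same config is persisted as JSON on the host, so it can be inspected more comfortably (assuming jq is installed and the default profile layout):

  $ jq '.KubernetesConfig | {KubernetesVersion, ContainerRuntime, ClusterName}' \
      ~/.minikube/profiles/functional-016863/config.json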
	I1017 19:08:36.592323   85117 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:08:36.592384   85117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:08:36.644213   85117 command_runner.go:130] > {
	I1017 19:08:36.644235   85117 command_runner.go:130] >   "images": [
	I1017 19:08:36.644239   85117 command_runner.go:130] >     {
	I1017 19:08:36.644246   85117 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1017 19:08:36.644251   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644257   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1017 19:08:36.644260   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644265   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644287   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1017 19:08:36.644298   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1017 19:08:36.644304   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644310   85117 command_runner.go:130] >       "size": "109379124",
	I1017 19:08:36.644319   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644328   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644357   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644368   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644379   85117 command_runner.go:130] >     },
	I1017 19:08:36.644384   85117 command_runner.go:130] >     {
	I1017 19:08:36.644397   85117 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1017 19:08:36.644403   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644412   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1017 19:08:36.644418   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644429   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644441   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1017 19:08:36.644455   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1017 19:08:36.644463   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644489   85117 command_runner.go:130] >       "size": "31470524",
	I1017 19:08:36.644500   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644506   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644517   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644524   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644532   85117 command_runner.go:130] >     },
	I1017 19:08:36.644537   85117 command_runner.go:130] >     {
	I1017 19:08:36.644546   85117 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1017 19:08:36.644570   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644577   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1017 19:08:36.644586   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644592   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644602   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1017 19:08:36.644610   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1017 19:08:36.644616   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644620   85117 command_runner.go:130] >       "size": "76103547",
	I1017 19:08:36.644623   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644628   85117 command_runner.go:130] >       "username": "nonroot",
	I1017 19:08:36.644634   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644638   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644644   85117 command_runner.go:130] >     },
	I1017 19:08:36.644655   85117 command_runner.go:130] >     {
	I1017 19:08:36.644664   85117 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1017 19:08:36.644668   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644675   85117 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1017 19:08:36.644678   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644685   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644692   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1017 19:08:36.644707   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1017 19:08:36.644713   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644716   85117 command_runner.go:130] >       "size": "195976448",
	I1017 19:08:36.644720   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.644726   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.644729   85117 command_runner.go:130] >       },
	I1017 19:08:36.644733   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644737   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644741   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644744   85117 command_runner.go:130] >     },
	I1017 19:08:36.644747   85117 command_runner.go:130] >     {
	I1017 19:08:36.644753   85117 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1017 19:08:36.644760   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644764   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1017 19:08:36.644767   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644772   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644781   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1017 19:08:36.644788   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1017 19:08:36.644794   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644798   85117 command_runner.go:130] >       "size": "89046001",
	I1017 19:08:36.644802   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.644806   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.644810   85117 command_runner.go:130] >       },
	I1017 19:08:36.644813   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644819   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644822   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644830   85117 command_runner.go:130] >     },
	I1017 19:08:36.644836   85117 command_runner.go:130] >     {
	I1017 19:08:36.644842   85117 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1017 19:08:36.644845   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644850   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1017 19:08:36.644856   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644860   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644868   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1017 19:08:36.644877   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1017 19:08:36.644880   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644884   85117 command_runner.go:130] >       "size": "76004181",
	I1017 19:08:36.644888   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.644892   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.644895   85117 command_runner.go:130] >       },
	I1017 19:08:36.644899   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644902   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644908   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644911   85117 command_runner.go:130] >     },
	I1017 19:08:36.644914   85117 command_runner.go:130] >     {
	I1017 19:08:36.644920   85117 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1017 19:08:36.644924   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644928   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1017 19:08:36.644932   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644944   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644951   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1017 19:08:36.644958   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1017 19:08:36.644961   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644964   85117 command_runner.go:130] >       "size": "73138073",
	I1017 19:08:36.644968   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644972   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644975   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644979   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644982   85117 command_runner.go:130] >     },
	I1017 19:08:36.644991   85117 command_runner.go:130] >     {
	I1017 19:08:36.644999   85117 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1017 19:08:36.645003   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.645010   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1017 19:08:36.645013   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645017   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.645041   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1017 19:08:36.645052   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1017 19:08:36.645055   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645059   85117 command_runner.go:130] >       "size": "53844823",
	I1017 19:08:36.645062   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.645066   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.645068   85117 command_runner.go:130] >       },
	I1017 19:08:36.645072   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.645075   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.645079   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.645081   85117 command_runner.go:130] >     },
	I1017 19:08:36.645084   85117 command_runner.go:130] >     {
	I1017 19:08:36.645090   85117 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1017 19:08:36.645093   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.645097   85117 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1017 19:08:36.645100   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645104   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.645110   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1017 19:08:36.645116   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1017 19:08:36.645120   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645123   85117 command_runner.go:130] >       "size": "742092",
	I1017 19:08:36.645126   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.645129   85117 command_runner.go:130] >         "value": "65535"
	I1017 19:08:36.645132   85117 command_runner.go:130] >       },
	I1017 19:08:36.645136   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.645143   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.645147   85117 command_runner.go:130] >       "pinned": true
	I1017 19:08:36.645154   85117 command_runner.go:130] >     }
	I1017 19:08:36.645157   85117 command_runner.go:130] >   ]
	I1017 19:08:36.645160   85117 command_runner.go:130] > }
	I1017 19:08:36.645398   85117 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:08:36.645415   85117 crio.go:433] Images already preloaded, skipping extraction
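The JSON above is the raw output of crictl images --output json; crio.go matches it against the preload manifest for v1.34.1/cri-o and decides no tarball extraction is needed. A compact spot check of the same data (illustrative, assumes jq is present on the node):

  $ sudo crictl images --output json | jq -r '.images[].repoTags[]'
  docker.io/kindest/kindnetd:v20250512-df8de77b
  gcr.io/k8s-minikube/storage-provisioner:v5
  registry.k8s.io/coredns/coredns:v1.12.1
  ...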
	I1017 19:08:36.645478   85117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:08:36.684800   85117 command_runner.go:130] > {
	I1017 19:08:36.684832   85117 command_runner.go:130] >   "images": [
	I1017 19:08:36.684855   85117 command_runner.go:130] >     {
	I1017 19:08:36.684869   85117 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1017 19:08:36.684877   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.684887   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1017 19:08:36.684892   85117 command_runner.go:130] >       ],
	I1017 19:08:36.684896   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.684909   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1017 19:08:36.684916   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1017 19:08:36.684919   85117 command_runner.go:130] >       ],
	I1017 19:08:36.684923   85117 command_runner.go:130] >       "size": "109379124",
	I1017 19:08:36.684927   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.684930   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.684935   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.684938   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.684942   85117 command_runner.go:130] >     },
	I1017 19:08:36.684945   85117 command_runner.go:130] >     {
	I1017 19:08:36.684950   85117 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1017 19:08:36.684955   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.684960   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1017 19:08:36.684973   85117 command_runner.go:130] >       ],
	I1017 19:08:36.684980   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.684994   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1017 19:08:36.685002   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1017 19:08:36.685005   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685013   85117 command_runner.go:130] >       "size": "31470524",
	I1017 19:08:36.685018   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.685021   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685025   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685029   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685032   85117 command_runner.go:130] >     },
	I1017 19:08:36.685035   85117 command_runner.go:130] >     {
	I1017 19:08:36.685041   85117 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1017 19:08:36.685045   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685055   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1017 19:08:36.685061   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685064   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685072   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1017 19:08:36.685081   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1017 19:08:36.685084   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685088   85117 command_runner.go:130] >       "size": "76103547",
	I1017 19:08:36.685092   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.685095   85117 command_runner.go:130] >       "username": "nonroot",
	I1017 19:08:36.685098   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685105   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685108   85117 command_runner.go:130] >     },
	I1017 19:08:36.685111   85117 command_runner.go:130] >     {
	I1017 19:08:36.685116   85117 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1017 19:08:36.685121   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685125   85117 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1017 19:08:36.685128   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685132   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685140   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1017 19:08:36.685152   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1017 19:08:36.685158   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685162   85117 command_runner.go:130] >       "size": "195976448",
	I1017 19:08:36.685165   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685169   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685172   85117 command_runner.go:130] >       },
	I1017 19:08:36.685176   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685179   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685183   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685186   85117 command_runner.go:130] >     },
	I1017 19:08:36.685195   85117 command_runner.go:130] >     {
	I1017 19:08:36.685202   85117 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1017 19:08:36.685205   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685209   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1017 19:08:36.685217   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685224   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685230   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1017 19:08:36.685243   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1017 19:08:36.685249   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685252   85117 command_runner.go:130] >       "size": "89046001",
	I1017 19:08:36.685256   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685259   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685263   85117 command_runner.go:130] >       },
	I1017 19:08:36.685266   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685270   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685274   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685277   85117 command_runner.go:130] >     },
	I1017 19:08:36.685280   85117 command_runner.go:130] >     {
	I1017 19:08:36.685292   85117 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1017 19:08:36.685301   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685310   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1017 19:08:36.685322   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685332   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685344   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1017 19:08:36.685361   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1017 19:08:36.685371   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685378   85117 command_runner.go:130] >       "size": "76004181",
	I1017 19:08:36.685388   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685394   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685403   85117 command_runner.go:130] >       },
	I1017 19:08:36.685407   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685414   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685418   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685421   85117 command_runner.go:130] >     },
	I1017 19:08:36.685424   85117 command_runner.go:130] >     {
	I1017 19:08:36.685430   85117 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1017 19:08:36.685437   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685448   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1017 19:08:36.685454   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685457   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685464   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1017 19:08:36.685473   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1017 19:08:36.685476   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685483   85117 command_runner.go:130] >       "size": "73138073",
	I1017 19:08:36.685487   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.685491   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685495   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685498   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685502   85117 command_runner.go:130] >     },
	I1017 19:08:36.685505   85117 command_runner.go:130] >     {
	I1017 19:08:36.685511   85117 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1017 19:08:36.685517   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685522   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1017 19:08:36.685528   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685531   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685577   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1017 19:08:36.685591   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1017 19:08:36.685594   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685598   85117 command_runner.go:130] >       "size": "53844823",
	I1017 19:08:36.685601   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685604   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685607   85117 command_runner.go:130] >       },
	I1017 19:08:36.685611   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685614   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685618   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685621   85117 command_runner.go:130] >     },
	I1017 19:08:36.685624   85117 command_runner.go:130] >     {
	I1017 19:08:36.685629   85117 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1017 19:08:36.685638   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685642   85117 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1017 19:08:36.685651   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685658   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685664   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1017 19:08:36.685673   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1017 19:08:36.685677   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685680   85117 command_runner.go:130] >       "size": "742092",
	I1017 19:08:36.685684   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685688   85117 command_runner.go:130] >         "value": "65535"
	I1017 19:08:36.685691   85117 command_runner.go:130] >       },
	I1017 19:08:36.685697   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685700   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685703   85117 command_runner.go:130] >       "pinned": true
	I1017 19:08:36.685706   85117 command_runner.go:130] >     }
	I1017 19:08:36.685711   85117 command_runner.go:130] >   ]
	I1017 19:08:36.685714   85117 command_runner.go:130] > }
	I1017 19:08:36.685822   85117 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:08:36.685834   85117 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:08:36.685842   85117 kubeadm.go:934] updating node { 192.168.39.205 8441 v1.34.1 crio true true} ...
	I1017 19:08:36.685955   85117 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-016863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
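kubeadm.go renders the kubelet systemd override shown above: ExecStart is cleared and then re-set to the versioned binary under /var/lib/minikube/binaries, with this node's hostname and IP pinned via --hostname-override and --node-ip. To confirm what systemd actually loaded on the node (sketch; exact drop-in paths vary by minikube version):

  $ systemctl cat kubelet | grep -A1 '^ExecStart'
  $ systemctl is-active kubelet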
	I1017 19:08:36.686028   85117 ssh_runner.go:195] Run: crio config
	I1017 19:08:36.721698   85117 command_runner.go:130] ! time="2025-10-17 19:08:36.711815300Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1017 19:08:36.726934   85117 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
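crio config prints the full effective TOML configuration: compiled-in defaults merged with /etc/crio/crio.conf (and crio.conf.d drop-ins, if present). In the dump that follows, commented lines are defaults, while uncommented keys such as cgroup_manager = "cgroupfs" and pids_limit = 1024 are this image's overrides. To list only the active settings (illustrative):

  $ sudo crio config 2>/dev/null | grep -vE '^[[:space:]]*(#|$)'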
	I1017 19:08:36.733071   85117 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1017 19:08:36.733099   85117 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1017 19:08:36.733109   85117 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1017 19:08:36.733113   85117 command_runner.go:130] > #
	I1017 19:08:36.733123   85117 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1017 19:08:36.733131   85117 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1017 19:08:36.733140   85117 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1017 19:08:36.733156   85117 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1017 19:08:36.733165   85117 command_runner.go:130] > # reload'.
	I1017 19:08:36.733177   85117 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1017 19:08:36.733189   85117 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1017 19:08:36.733199   85117 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1017 19:08:36.733209   85117 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1017 19:08:36.733222   85117 command_runner.go:130] > [crio]
	I1017 19:08:36.733230   85117 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1017 19:08:36.733234   85117 command_runner.go:130] > # containers images, in this directory.
	I1017 19:08:36.733241   85117 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1017 19:08:36.733256   85117 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1017 19:08:36.733263   85117 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1017 19:08:36.733270   85117 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory instead of under the root directory.
	I1017 19:08:36.733277   85117 command_runner.go:130] > # imagestore = ""
	I1017 19:08:36.733283   85117 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1017 19:08:36.733291   85117 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1017 19:08:36.733296   85117 command_runner.go:130] > # storage_driver = "overlay"
	I1017 19:08:36.733307   85117 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1017 19:08:36.733320   85117 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1017 19:08:36.733327   85117 command_runner.go:130] > storage_option = [
	I1017 19:08:36.733337   85117 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1017 19:08:36.733342   85117 command_runner.go:130] > ]
	I1017 19:08:36.733354   85117 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1017 19:08:36.733363   85117 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1017 19:08:36.733368   85117 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1017 19:08:36.733374   85117 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1017 19:08:36.733380   85117 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1017 19:08:36.733387   85117 command_runner.go:130] > # always happen on a node reboot
	I1017 19:08:36.733391   85117 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1017 19:08:36.733411   85117 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1017 19:08:36.733424   85117 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1017 19:08:36.733432   85117 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1017 19:08:36.733443   85117 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1017 19:08:36.733456   85117 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1017 19:08:36.733470   85117 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1017 19:08:36.733480   85117 command_runner.go:130] > # internal_wipe = true
	I1017 19:08:36.733489   85117 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1017 19:08:36.733497   85117 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1017 19:08:36.733504   85117 command_runner.go:130] > # internal_repair = false
	I1017 19:08:36.733522   85117 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1017 19:08:36.733534   85117 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1017 19:08:36.733544   85117 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1017 19:08:36.733565   85117 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1017 19:08:36.733582   85117 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1017 19:08:36.733590   85117 command_runner.go:130] > [crio.api]
	I1017 19:08:36.733598   85117 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1017 19:08:36.733608   85117 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1017 19:08:36.733616   85117 command_runner.go:130] > # IP address on which the stream server will listen.
	I1017 19:08:36.733626   85117 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1017 19:08:36.733636   85117 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1017 19:08:36.733647   85117 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1017 19:08:36.733653   85117 command_runner.go:130] > # stream_port = "0"
	I1017 19:08:36.733665   85117 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1017 19:08:36.733671   85117 command_runner.go:130] > # stream_enable_tls = false
	I1017 19:08:36.733683   85117 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1017 19:08:36.733692   85117 command_runner.go:130] > # stream_idle_timeout = ""
	I1017 19:08:36.733699   85117 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1017 19:08:36.733709   85117 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1017 19:08:36.733719   85117 command_runner.go:130] > # minutes.
	I1017 19:08:36.733729   85117 command_runner.go:130] > # stream_tls_cert = ""
	I1017 19:08:36.733738   85117 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1017 19:08:36.733749   85117 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1017 19:08:36.733755   85117 command_runner.go:130] > # stream_tls_key = ""
	I1017 19:08:36.733767   85117 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1017 19:08:36.733777   85117 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1017 19:08:36.733807   85117 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1017 19:08:36.733817   85117 command_runner.go:130] > # stream_tls_ca = ""
	I1017 19:08:36.733828   85117 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1017 19:08:36.733839   85117 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1017 19:08:36.733850   85117 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1017 19:08:36.733860   85117 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1017 19:08:36.733870   85117 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1017 19:08:36.733888   85117 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1017 19:08:36.733894   85117 command_runner.go:130] > [crio.runtime]
	I1017 19:08:36.733902   85117 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1017 19:08:36.733914   85117 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1017 19:08:36.733923   85117 command_runner.go:130] > # "nofile=1024:2048"
	I1017 19:08:36.733936   85117 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1017 19:08:36.733945   85117 command_runner.go:130] > # default_ulimits = [
	I1017 19:08:36.733950   85117 command_runner.go:130] > # ]
	I1017 19:08:36.733961   85117 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1017 19:08:36.733966   85117 command_runner.go:130] > # no_pivot = false
	I1017 19:08:36.733974   85117 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1017 19:08:36.733984   85117 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1017 19:08:36.733990   85117 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1017 19:08:36.734005   85117 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1017 19:08:36.734017   85117 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1017 19:08:36.734041   85117 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1017 19:08:36.734050   85117 command_runner.go:130] > conmon = "/usr/bin/conmon"
	I1017 19:08:36.734057   85117 command_runner.go:130] > # Cgroup setting for conmon
	I1017 19:08:36.734070   85117 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1017 19:08:36.734079   85117 command_runner.go:130] > conmon_cgroup = "pod"
	I1017 19:08:36.734085   85117 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1017 19:08:36.734096   85117 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1017 19:08:36.734105   85117 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1017 19:08:36.734115   85117 command_runner.go:130] > conmon_env = [
	I1017 19:08:36.734124   85117 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1017 19:08:36.734133   85117 command_runner.go:130] > ]
	I1017 19:08:36.734142   85117 command_runner.go:130] > # Additional environment variables to set for all the
	I1017 19:08:36.734152   85117 command_runner.go:130] > # containers. These are overridden if set in the
	I1017 19:08:36.734161   85117 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1017 19:08:36.734170   85117 command_runner.go:130] > # default_env = [
	I1017 19:08:36.734175   85117 command_runner.go:130] > # ]
	I1017 19:08:36.734186   85117 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1017 19:08:36.734193   85117 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1017 19:08:36.734374   85117 command_runner.go:130] > # selinux = false
	I1017 19:08:36.734484   85117 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1017 19:08:36.734495   85117 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1017 19:08:36.734505   85117 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1017 19:08:36.734516   85117 command_runner.go:130] > # seccomp_profile = ""
	I1017 19:08:36.734531   85117 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1017 19:08:36.734543   85117 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1017 19:08:36.734567   85117 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1017 19:08:36.734585   85117 command_runner.go:130] > # which might increase security.
	I1017 19:08:36.734593   85117 command_runner.go:130] > # This option is currently deprecated,
	I1017 19:08:36.734610   85117 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1017 19:08:36.734624   85117 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1017 19:08:36.734634   85117 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1017 19:08:36.734646   85117 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1017 19:08:36.734697   85117 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1017 19:08:36.735591   85117 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1017 19:08:36.735609   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.735623   85117 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1017 19:08:36.735636   85117 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1017 19:08:36.735643   85117 command_runner.go:130] > # the cgroup blockio controller.
	I1017 19:08:36.735656   85117 command_runner.go:130] > # blockio_config_file = ""
	I1017 19:08:36.735670   85117 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1017 19:08:36.735675   85117 command_runner.go:130] > # blockio parameters.
	I1017 19:08:36.735681   85117 command_runner.go:130] > # blockio_reload = false
	I1017 19:08:36.735706   85117 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1017 19:08:36.735733   85117 command_runner.go:130] > # irqbalance daemon.
	I1017 19:08:36.735812   85117 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1017 19:08:36.735833   85117 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1017 19:08:36.736170   85117 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1017 19:08:36.736193   85117 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1017 19:08:36.736203   85117 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1017 19:08:36.736229   85117 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1017 19:08:36.736240   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.736246   85117 command_runner.go:130] > # rdt_config_file = ""
	I1017 19:08:36.736258   85117 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1017 19:08:36.736268   85117 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1017 19:08:36.736300   85117 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1017 19:08:36.736312   85117 command_runner.go:130] > # separate_pull_cgroup = ""
	I1017 19:08:36.736321   85117 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1017 19:08:36.736329   85117 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1017 19:08:36.736335   85117 command_runner.go:130] > # will be added.
	I1017 19:08:36.736341   85117 command_runner.go:130] > # default_capabilities = [
	I1017 19:08:36.736349   85117 command_runner.go:130] > # 	"CHOWN",
	I1017 19:08:36.736355   85117 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1017 19:08:36.736360   85117 command_runner.go:130] > # 	"FSETID",
	I1017 19:08:36.736366   85117 command_runner.go:130] > # 	"FOWNER",
	I1017 19:08:36.736374   85117 command_runner.go:130] > # 	"SETGID",
	I1017 19:08:36.736379   85117 command_runner.go:130] > # 	"SETUID",
	I1017 19:08:36.736384   85117 command_runner.go:130] > # 	"SETPCAP",
	I1017 19:08:36.736392   85117 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1017 19:08:36.736401   85117 command_runner.go:130] > # 	"KILL",
	I1017 19:08:36.736409   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736420   85117 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1017 19:08:36.736433   85117 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1017 19:08:36.736444   85117 command_runner.go:130] > # add_inheritable_capabilities = false
	I1017 19:08:36.736452   85117 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1017 19:08:36.736463   85117 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1017 19:08:36.736472   85117 command_runner.go:130] > default_sysctls = [
	I1017 19:08:36.736482   85117 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1017 19:08:36.736490   85117 command_runner.go:130] > ]
	I1017 19:08:36.736501   85117 command_runner.go:130] > # List of devices on the host that a
	I1017 19:08:36.736513   85117 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1017 19:08:36.736521   85117 command_runner.go:130] > # allowed_devices = [
	I1017 19:08:36.736526   85117 command_runner.go:130] > # 	"/dev/fuse",
	I1017 19:08:36.736534   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736541   85117 command_runner.go:130] > # List of additional devices, specified as
	I1017 19:08:36.736569   85117 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1017 19:08:36.736580   85117 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1017 19:08:36.736589   85117 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1017 19:08:36.736598   85117 command_runner.go:130] > # additional_devices = [
	I1017 19:08:36.736602   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736612   85117 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1017 19:08:36.736621   85117 command_runner.go:130] > # cdi_spec_dirs = [
	I1017 19:08:36.736627   85117 command_runner.go:130] > # 	"/etc/cdi",
	I1017 19:08:36.736635   85117 command_runner.go:130] > # 	"/var/run/cdi",
	I1017 19:08:36.736640   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736652   85117 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1017 19:08:36.736664   85117 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1017 19:08:36.736673   85117 command_runner.go:130] > # Defaults to false.
	I1017 19:08:36.736684   85117 command_runner.go:130] > # device_ownership_from_security_context = false
	I1017 19:08:36.736696   85117 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1017 19:08:36.736707   85117 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1017 19:08:36.736715   85117 command_runner.go:130] > # hooks_dir = [
	I1017 19:08:36.736723   85117 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1017 19:08:36.736732   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736744   85117 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1017 19:08:36.736756   85117 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1017 19:08:36.736767   85117 command_runner.go:130] > # its default mounts from the following two files:
	I1017 19:08:36.736774   85117 command_runner.go:130] > #
	I1017 19:08:36.736783   85117 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1017 19:08:36.736795   85117 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1017 19:08:36.736809   85117 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1017 19:08:36.736817   85117 command_runner.go:130] > #
	I1017 19:08:36.736826   85117 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1017 19:08:36.736838   85117 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1017 19:08:36.736850   85117 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1017 19:08:36.736858   85117 command_runner.go:130] > #      only add mounts it finds in this file.
	I1017 19:08:36.736865   85117 command_runner.go:130] > #
	I1017 19:08:36.736871   85117 command_runner.go:130] > # default_mounts_file = ""
	I1017 19:08:36.736882   85117 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1017 19:08:36.736894   85117 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1017 19:08:36.736914   85117 command_runner.go:130] > pids_limit = 1024
	I1017 19:08:36.736938   85117 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1017 19:08:36.736957   85117 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1017 19:08:36.736976   85117 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1017 19:08:36.737004   85117 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1017 19:08:36.737015   85117 command_runner.go:130] > # log_size_max = -1
	I1017 19:08:36.737028   85117 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1017 19:08:36.737037   85117 command_runner.go:130] > # log_to_journald = false
	I1017 19:08:36.737051   85117 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1017 19:08:36.737062   85117 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1017 19:08:36.737073   85117 command_runner.go:130] > # Path to directory for container attach sockets.
	I1017 19:08:36.737084   85117 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1017 19:08:36.737094   85117 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1017 19:08:36.737102   85117 command_runner.go:130] > # bind_mount_prefix = ""
	I1017 19:08:36.737107   85117 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1017 19:08:36.737113   85117 command_runner.go:130] > # read_only = false
	I1017 19:08:36.737122   85117 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1017 19:08:36.737131   85117 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1017 19:08:36.737137   85117 command_runner.go:130] > # live configuration reload.
	I1017 19:08:36.737141   85117 command_runner.go:130] > # log_level = "info"
	I1017 19:08:36.737149   85117 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1017 19:08:36.737153   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.737159   85117 command_runner.go:130] > # log_filter = ""
	I1017 19:08:36.737165   85117 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1017 19:08:36.737175   85117 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1017 19:08:36.737181   85117 command_runner.go:130] > # separated by comma.
	I1017 19:08:36.737189   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737199   85117 command_runner.go:130] > # uid_mappings = ""
	I1017 19:08:36.737214   85117 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1017 19:08:36.737222   85117 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1017 19:08:36.737227   85117 command_runner.go:130] > # separated by comma.
	I1017 19:08:36.737234   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737238   85117 command_runner.go:130] > # gid_mappings = ""
	I1017 19:08:36.737244   85117 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1017 19:08:36.737252   85117 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1017 19:08:36.737258   85117 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1017 19:08:36.737268   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737274   85117 command_runner.go:130] > # minimum_mappable_uid = -1
	I1017 19:08:36.737280   85117 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1017 19:08:36.737285   85117 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1017 19:08:36.737293   85117 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1017 19:08:36.737301   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737306   85117 command_runner.go:130] > # minimum_mappable_gid = -1
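	To make the containerUID:HostUID:Size form concrete, a hypothetical mapping that remaps container root to host UID/GID 100000 could look like this (all values here are assumptions for illustration, not part of this config):
	
		uid_mappings = "0:100000:65536"
		gid_mappings = "0:100000:65536"
		minimum_mappable_uid = 100000
		minimum_mappable_gid = 100000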
	I1017 19:08:36.737312   85117 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1017 19:08:36.737318   85117 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1017 19:08:36.737326   85117 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1017 19:08:36.737330   85117 command_runner.go:130] > # ctr_stop_timeout = 30
	I1017 19:08:36.737335   85117 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1017 19:08:36.737343   85117 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1017 19:08:36.737349   85117 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1017 19:08:36.737354   85117 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1017 19:08:36.737360   85117 command_runner.go:130] > drop_infra_ctr = false
	I1017 19:08:36.737365   85117 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1017 19:08:36.737370   85117 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1017 19:08:36.737377   85117 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1017 19:08:36.737382   85117 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1017 19:08:36.737388   85117 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1017 19:08:36.737396   85117 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1017 19:08:36.737402   85117 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1017 19:08:36.737409   85117 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1017 19:08:36.737412   85117 command_runner.go:130] > # shared_cpuset = ""
	I1017 19:08:36.737421   85117 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1017 19:08:36.737428   85117 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1017 19:08:36.737434   85117 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1017 19:08:36.737441   85117 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1017 19:08:36.737447   85117 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1017 19:08:36.737452   85117 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1017 19:08:36.737460   85117 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1017 19:08:36.737464   85117 command_runner.go:130] > # enable_criu_support = false
	I1017 19:08:36.737471   85117 command_runner.go:130] > # Enable/disable the generation of the container and
	I1017 19:08:36.737477   85117 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1017 19:08:36.737484   85117 command_runner.go:130] > # enable_pod_events = false
	I1017 19:08:36.737490   85117 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1017 19:08:36.737507   85117 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1017 19:08:36.737510   85117 command_runner.go:130] > # default_runtime = "runc"
	I1017 19:08:36.737518   85117 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1017 19:08:36.737525   85117 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior, where the source is created as a directory).
	I1017 19:08:36.737537   85117 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1017 19:08:36.737545   85117 command_runner.go:130] > # creation as a file is not desired either.
	I1017 19:08:36.737567   85117 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1017 19:08:36.737578   85117 command_runner.go:130] > # the hostname is being managed dynamically.
	I1017 19:08:36.737585   85117 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1017 19:08:36.737590   85117 command_runner.go:130] > # ]
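	Using the /etc/hostname case described above, a minimal sketch of this option would be:
	
		absent_mount_sources_to_reject = [
			"/etc/hostname",
		]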
	I1017 19:08:36.737597   85117 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1017 19:08:36.737605   85117 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1017 19:08:36.737613   85117 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1017 19:08:36.737618   85117 command_runner.go:130] > # Each entry in the table should follow the format:
	I1017 19:08:36.737623   85117 command_runner.go:130] > #
	I1017 19:08:36.737628   85117 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1017 19:08:36.737635   85117 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1017 19:08:36.737639   85117 command_runner.go:130] > # runtime_type = "oci"
	I1017 19:08:36.737698   85117 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1017 19:08:36.737709   85117 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1017 19:08:36.737719   85117 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1017 19:08:36.737725   85117 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1017 19:08:36.737735   85117 command_runner.go:130] > # monitor_env = []
	I1017 19:08:36.737744   85117 command_runner.go:130] > # privileged_without_host_devices = false
	I1017 19:08:36.737748   85117 command_runner.go:130] > # allowed_annotations = []
	I1017 19:08:36.737754   85117 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1017 19:08:36.737763   85117 command_runner.go:130] > # Where:
	I1017 19:08:36.737771   85117 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1017 19:08:36.737778   85117 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1017 19:08:36.737786   85117 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1017 19:08:36.737794   85117 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1017 19:08:36.737798   85117 command_runner.go:130] > #   in $PATH.
	I1017 19:08:36.737803   85117 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1017 19:08:36.737810   85117 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1017 19:08:36.737816   85117 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1017 19:08:36.737821   85117 command_runner.go:130] > #   state.
	I1017 19:08:36.737828   85117 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1017 19:08:36.737836   85117 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1017 19:08:36.737842   85117 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1017 19:08:36.737849   85117 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1017 19:08:36.737856   85117 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1017 19:08:36.737865   85117 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1017 19:08:36.737872   85117 command_runner.go:130] > #   The currently recognized values are:
	I1017 19:08:36.737878   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1017 19:08:36.737892   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1017 19:08:36.737900   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1017 19:08:36.737906   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1017 19:08:36.737916   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1017 19:08:36.737925   85117 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1017 19:08:36.737935   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1017 19:08:36.737943   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1017 19:08:36.737951   85117 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1017 19:08:36.737958   85117 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1017 19:08:36.737966   85117 command_runner.go:130] > #   deprecated option "conmon".
	I1017 19:08:36.737973   85117 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1017 19:08:36.737981   85117 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1017 19:08:36.737987   85117 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1017 19:08:36.737995   85117 command_runner.go:130] > #   should be moved to the container's cgroup
	I1017 19:08:36.738001   85117 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1017 19:08:36.738010   85117 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1017 19:08:36.738019   85117 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1017 19:08:36.738027   85117 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1017 19:08:36.738030   85117 command_runner.go:130] > #
	I1017 19:08:36.738038   85117 command_runner.go:130] > # Using the seccomp notifier feature:
	I1017 19:08:36.738041   85117 command_runner.go:130] > #
	I1017 19:08:36.738046   85117 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1017 19:08:36.738055   85117 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1017 19:08:36.738060   85117 command_runner.go:130] > #
	I1017 19:08:36.738067   85117 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1017 19:08:36.738075   85117 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1017 19:08:36.738080   85117 command_runner.go:130] > #
	I1017 19:08:36.738086   85117 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1017 19:08:36.738090   85117 command_runner.go:130] > # feature.
	I1017 19:08:36.738092   85117 command_runner.go:130] > #
	I1017 19:08:36.738100   85117 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1017 19:08:36.738108   85117 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1017 19:08:36.738114   85117 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1017 19:08:36.738123   85117 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1017 19:08:36.738132   85117 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1017 19:08:36.738137   85117 command_runner.go:130] > #
	I1017 19:08:36.738143   85117 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1017 19:08:36.738151   85117 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1017 19:08:36.738156   85117 command_runner.go:130] > #
	I1017 19:08:36.738162   85117 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1017 19:08:36.738169   85117 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1017 19:08:36.738172   85117 command_runner.go:130] > #
	I1017 19:08:36.738178   85117 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1017 19:08:36.738186   85117 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1017 19:08:36.738190   85117 command_runner.go:130] > # limitation.
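	Putting the notifier pieces together, a hypothetical runtime handler that allows the annotation might be declared as follows (the handler name runc-debug is made up for illustration):
	
		[crio.runtime.runtimes.runc-debug]
		runtime_path = "/usr/bin/runc"
		allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	
	A pod opting in would then carry the annotation io.kubernetes.cri-o.seccompNotifierAction: "stop" and, per the notes above, set restartPolicy: Never.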
	I1017 19:08:36.738198   85117 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1017 19:08:36.738202   85117 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1017 19:08:36.738212   85117 command_runner.go:130] > runtime_type = "oci"
	I1017 19:08:36.738218   85117 command_runner.go:130] > runtime_root = "/run/runc"
	I1017 19:08:36.738222   85117 command_runner.go:130] > runtime_config_path = ""
	I1017 19:08:36.738228   85117 command_runner.go:130] > monitor_path = "/usr/bin/conmon"
	I1017 19:08:36.738233   85117 command_runner.go:130] > monitor_cgroup = "pod"
	I1017 19:08:36.738239   85117 command_runner.go:130] > monitor_exec_cgroup = ""
	I1017 19:08:36.738242   85117 command_runner.go:130] > monitor_env = [
	I1017 19:08:36.738250   85117 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1017 19:08:36.738253   85117 command_runner.go:130] > ]
	I1017 19:08:36.738258   85117 command_runner.go:130] > privileged_without_host_devices = false
	I1017 19:08:36.738270   85117 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1017 19:08:36.738277   85117 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1017 19:08:36.738283   85117 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1017 19:08:36.738302   85117 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1017 19:08:36.738315   85117 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1017 19:08:36.738320   85117 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1017 19:08:36.738331   85117 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1017 19:08:36.738339   85117 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1017 19:08:36.738347   85117 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1017 19:08:36.738354   85117 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1017 19:08:36.738359   85117 command_runner.go:130] > # Example:
	I1017 19:08:36.738364   85117 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1017 19:08:36.738368   85117 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1017 19:08:36.738373   85117 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1017 19:08:36.738378   85117 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1017 19:08:36.738381   85117 command_runner.go:130] > # cpuset = "0-1"
	I1017 19:08:36.738384   85117 command_runner.go:130] > # cpushares = 0
	I1017 19:08:36.738388   85117 command_runner.go:130] > # Where:
	I1017 19:08:36.738392   85117 command_runner.go:130] > # The workload name is workload-type.
	I1017 19:08:36.738399   85117 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1017 19:08:36.738406   85117 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1017 19:08:36.738411   85117 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1017 19:08:36.738419   85117 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1017 19:08:36.738427   85117 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
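	Tying the workload example to a pod spec, the opt-in plus a per-container override (following the $annotation_prefix.$resource/$ctrName form above; the container name my-ctr is hypothetical) could look like:
	
		metadata:
		  annotations:
		    io.crio/workload: ""
		    io.crio.workload-type.cpushares/my-ctr: "512"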
	I1017 19:08:36.738431   85117 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1017 19:08:36.738437   85117 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1017 19:08:36.738443   85117 command_runner.go:130] > # Default value is set to true
	I1017 19:08:36.738447   85117 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1017 19:08:36.738454   85117 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1017 19:08:36.738459   85117 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1017 19:08:36.738465   85117 command_runner.go:130] > # Default value is set to 'false'
	I1017 19:08:36.738470   85117 command_runner.go:130] > # disable_hostport_mapping = false
	I1017 19:08:36.738478   85117 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1017 19:08:36.738484   85117 command_runner.go:130] > #
	I1017 19:08:36.738489   85117 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1017 19:08:36.738500   85117 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1017 19:08:36.738508   85117 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1017 19:08:36.738517   85117 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1017 19:08:36.738522   85117 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1017 19:08:36.738529   85117 command_runner.go:130] > [crio.image]
	I1017 19:08:36.738535   85117 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1017 19:08:36.738541   85117 command_runner.go:130] > # default_transport = "docker://"
	I1017 19:08:36.738547   85117 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1017 19:08:36.738573   85117 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1017 19:08:36.738580   85117 command_runner.go:130] > # global_auth_file = ""
	I1017 19:08:36.738589   85117 command_runner.go:130] > # The image used to instantiate infra containers.
	I1017 19:08:36.738594   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.738601   85117 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10.1"
	I1017 19:08:36.738608   85117 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1017 19:08:36.738616   85117 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1017 19:08:36.738622   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.738626   85117 command_runner.go:130] > # pause_image_auth_file = ""
	I1017 19:08:36.738634   85117 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1017 19:08:36.738642   85117 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1017 19:08:36.738648   85117 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1017 19:08:36.738656   85117 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1017 19:08:36.738660   85117 command_runner.go:130] > # pause_command = "/pause"
	I1017 19:08:36.738668   85117 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1017 19:08:36.738674   85117 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1017 19:08:36.738690   85117 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1017 19:08:36.738700   85117 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1017 19:08:36.738709   85117 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1017 19:08:36.738718   85117 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1017 19:08:36.738722   85117 command_runner.go:130] > # pinned_images = [
	I1017 19:08:36.738727   85117 command_runner.go:130] > # ]
	I1017 19:08:36.738734   85117 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1017 19:08:36.738742   85117 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1017 19:08:36.738748   85117 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1017 19:08:36.738756   85117 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1017 19:08:36.738762   85117 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1017 19:08:36.738768   85117 command_runner.go:130] > # signature_policy = ""
	I1017 19:08:36.738773   85117 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1017 19:08:36.738781   85117 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1017 19:08:36.738787   85117 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1017 19:08:36.738792   85117 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1017 19:08:36.738798   85117 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1017 19:08:36.738802   85117 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
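	For context, the system-wide default referenced above (/etc/containers/policy.json) is a containers-policy.json(5) document; a minimal permissive sketch of that file's shape is:
	
		{
		    "default": [
		        { "type": "insecureAcceptAnything" }
		    ]
		}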
	I1017 19:08:36.738808   85117 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1017 19:08:36.738813   85117 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1017 19:08:36.738817   85117 command_runner.go:130] > # changing them here.
	I1017 19:08:36.738820   85117 command_runner.go:130] > # insecure_registries = [
	I1017 19:08:36.738823   85117 command_runner.go:130] > # ]
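	As the comment suggests, skipping TLS verification is usually better expressed in /etc/containers/registries.conf; a sketch for a hypothetical registry registry.example.com:
	
		[[registry]]
		prefix = "registry.example.com"
		location = "registry.example.com"
		insecure = true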
	I1017 19:08:36.738828   85117 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1017 19:08:36.738833   85117 command_runner.go:130] > # ignore; the last ignores volumes entirely.
	I1017 19:08:36.738836   85117 command_runner.go:130] > # image_volumes = "mkdir"
	I1017 19:08:36.738841   85117 command_runner.go:130] > # Temporary directory to use for storing big files
	I1017 19:08:36.738845   85117 command_runner.go:130] > # big_files_temporary_dir = ""
	I1017 19:08:36.738850   85117 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1017 19:08:36.738853   85117 command_runner.go:130] > # CNI plugins.
	I1017 19:08:36.738856   85117 command_runner.go:130] > [crio.network]
	I1017 19:08:36.738861   85117 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1017 19:08:36.738869   85117 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1017 19:08:36.738873   85117 command_runner.go:130] > # cni_default_network = ""
	I1017 19:08:36.738880   85117 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1017 19:08:36.738884   85117 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1017 19:08:36.738892   85117 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1017 19:08:36.738895   85117 command_runner.go:130] > # plugin_dirs = [
	I1017 19:08:36.738901   85117 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1017 19:08:36.738904   85117 command_runner.go:130] > # ]
	I1017 19:08:36.738909   85117 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1017 19:08:36.738915   85117 command_runner.go:130] > [crio.metrics]
	I1017 19:08:36.738919   85117 command_runner.go:130] > # Globally enable or disable metrics support.
	I1017 19:08:36.738925   85117 command_runner.go:130] > enable_metrics = true
	I1017 19:08:36.738929   85117 command_runner.go:130] > # Specify enabled metrics collectors.
	I1017 19:08:36.738939   85117 command_runner.go:130] > # Per default all metrics are enabled.
	I1017 19:08:36.738948   85117 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1017 19:08:36.738957   85117 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1017 19:08:36.738966   85117 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1017 19:08:36.738969   85117 command_runner.go:130] > # metrics_collectors = [
	I1017 19:08:36.738975   85117 command_runner.go:130] > # 	"operations",
	I1017 19:08:36.738980   85117 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1017 19:08:36.738988   85117 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1017 19:08:36.738992   85117 command_runner.go:130] > # 	"operations_errors",
	I1017 19:08:36.738998   85117 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1017 19:08:36.739002   85117 command_runner.go:130] > # 	"image_pulls_by_name",
	I1017 19:08:36.739008   85117 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1017 19:08:36.739012   85117 command_runner.go:130] > # 	"image_pulls_failures",
	I1017 19:08:36.739019   85117 command_runner.go:130] > # 	"image_pulls_successes",
	I1017 19:08:36.739022   85117 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1017 19:08:36.739029   85117 command_runner.go:130] > # 	"image_layer_reuse",
	I1017 19:08:36.739033   85117 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1017 19:08:36.739037   85117 command_runner.go:130] > # 	"containers_oom_total",
	I1017 19:08:36.739041   85117 command_runner.go:130] > # 	"containers_oom",
	I1017 19:08:36.739047   85117 command_runner.go:130] > # 	"processes_defunct",
	I1017 19:08:36.739050   85117 command_runner.go:130] > # 	"operations_total",
	I1017 19:08:36.739057   85117 command_runner.go:130] > # 	"operations_latency_seconds",
	I1017 19:08:36.739061   85117 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1017 19:08:36.739068   85117 command_runner.go:130] > # 	"operations_errors_total",
	I1017 19:08:36.739071   85117 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1017 19:08:36.739078   85117 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1017 19:08:36.739082   85117 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1017 19:08:36.739088   85117 command_runner.go:130] > # 	"image_pulls_success_total",
	I1017 19:08:36.739092   85117 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1017 19:08:36.739099   85117 command_runner.go:130] > # 	"containers_oom_count_total",
	I1017 19:08:36.739103   85117 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1017 19:08:36.739110   85117 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1017 19:08:36.739112   85117 command_runner.go:130] > # ]
	I1017 19:08:36.739119   85117 command_runner.go:130] > # The port on which the metrics server will listen.
	I1017 19:08:36.739125   85117 command_runner.go:130] > # metrics_port = 9090
	I1017 19:08:36.739132   85117 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1017 19:08:36.739136   85117 command_runner.go:130] > # metrics_socket = ""
	I1017 19:08:36.739143   85117 command_runner.go:130] > # The certificate for the secure metrics server.
	I1017 19:08:36.739148   85117 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1017 19:08:36.739156   85117 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1017 19:08:36.739161   85117 command_runner.go:130] > # certificate on any modification event.
	I1017 19:08:36.739165   85117 command_runner.go:130] > # metrics_cert = ""
	I1017 19:08:36.739170   85117 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1017 19:08:36.739176   85117 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1017 19:08:36.739180   85117 command_runner.go:130] > # metrics_key = ""
	I1017 19:08:36.739188   85117 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1017 19:08:36.739191   85117 command_runner.go:130] > [crio.tracing]
	I1017 19:08:36.739200   85117 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1017 19:08:36.739203   85117 command_runner.go:130] > # enable_tracing = false
	I1017 19:08:36.739214   85117 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1017 19:08:36.739221   85117 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1017 19:08:36.739227   85117 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1017 19:08:36.739240   85117 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1017 19:08:36.739246   85117 command_runner.go:130] > # CRI-O NRI configuration.
	I1017 19:08:36.739250   85117 command_runner.go:130] > [crio.nri]
	I1017 19:08:36.739254   85117 command_runner.go:130] > # Globally enable or disable NRI.
	I1017 19:08:36.739260   85117 command_runner.go:130] > # enable_nri = false
	I1017 19:08:36.739264   85117 command_runner.go:130] > # NRI socket to listen on.
	I1017 19:08:36.739271   85117 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1017 19:08:36.739275   85117 command_runner.go:130] > # NRI plugin directory to use.
	I1017 19:08:36.739280   85117 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1017 19:08:36.739287   85117 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1017 19:08:36.739291   85117 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1017 19:08:36.739299   85117 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1017 19:08:36.739303   85117 command_runner.go:130] > # nri_disable_connections = false
	I1017 19:08:36.739310   85117 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1017 19:08:36.739315   85117 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1017 19:08:36.739325   85117 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1017 19:08:36.739332   85117 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1017 19:08:36.739337   85117 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1017 19:08:36.739343   85117 command_runner.go:130] > [crio.stats]
	I1017 19:08:36.739348   85117 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1017 19:08:36.739353   85117 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1017 19:08:36.739360   85117 command_runner.go:130] > # stats_collection_period = 0
	I1017 19:08:36.739439   85117 cni.go:84] Creating CNI manager for ""
	I1017 19:08:36.739451   85117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 19:08:36.739480   85117 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:08:36.739504   85117 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-016863 NodeName:functional-016863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:08:36.739644   85117 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-016863"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.205"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:08:36.739707   85117 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:08:36.752377   85117 command_runner.go:130] > kubeadm
	I1017 19:08:36.752404   85117 command_runner.go:130] > kubectl
	I1017 19:08:36.752408   85117 command_runner.go:130] > kubelet
	I1017 19:08:36.752864   85117 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:08:36.752933   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:08:36.764722   85117 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1017 19:08:36.786673   85117 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:08:36.808021   85117 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1017 19:08:36.828821   85117 ssh_runner.go:195] Run: grep 192.168.39.205	control-plane.minikube.internal$ /etc/hosts
	I1017 19:08:36.833177   85117 command_runner.go:130] > 192.168.39.205	control-plane.minikube.internal
	I1017 19:08:36.833246   85117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:08:37.010934   85117 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:08:37.030439   85117 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863 for IP: 192.168.39.205
	I1017 19:08:37.030467   85117 certs.go:195] generating shared ca certs ...
	I1017 19:08:37.030485   85117 certs.go:227] acquiring lock for ca certs: {Name:mka410ab7d3b92eaaa0d0545223807c0ba196baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:08:37.030690   85117 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key
	I1017 19:08:37.030747   85117 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key
	I1017 19:08:37.030762   85117 certs.go:257] generating profile certs ...
	I1017 19:08:37.030878   85117 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/client.key
	I1017 19:08:37.030972   85117 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.key.c24585d5
	I1017 19:08:37.031049   85117 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.key
	I1017 19:08:37.031067   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:08:37.031086   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:08:37.031102   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:08:37.031121   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:08:37.031138   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:08:37.031155   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:08:37.031179   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:08:37.031195   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:08:37.031270   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem (1338 bytes)
	W1017 19:08:37.031314   85117 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439_empty.pem, impossibly tiny 0 bytes
	I1017 19:08:37.031328   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 19:08:37.031364   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:08:37.031395   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:08:37.031426   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem (1679 bytes)
	I1017 19:08:37.031478   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem (1708 bytes)
	I1017 19:08:37.031518   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.031537   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.031564   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem -> /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.032341   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:08:37.064212   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:08:37.094935   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:08:37.126973   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:08:37.157540   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 19:08:37.187168   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 19:08:37.217543   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:08:37.247400   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 19:08:37.278758   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem --> /usr/share/ca-certificates/794392.pem (1708 bytes)
	I1017 19:08:37.308088   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:08:37.338377   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem --> /usr/share/ca-certificates/79439.pem (1338 bytes)
	I1017 19:08:37.369350   85117 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:08:37.390154   85117 ssh_runner.go:195] Run: openssl version
	I1017 19:08:37.397183   85117 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1017 19:08:37.397310   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/79439.pem && ln -fs /usr/share/ca-certificates/79439.pem /etc/ssl/certs/79439.pem"
	I1017 19:08:37.411628   85117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.417085   85117 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 17 19:05 /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.417178   85117 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:05 /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.417250   85117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.424962   85117 command_runner.go:130] > 51391683
	I1017 19:08:37.425158   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/79439.pem /etc/ssl/certs/51391683.0"
	I1017 19:08:37.437578   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/794392.pem && ln -fs /usr/share/ca-certificates/794392.pem /etc/ssl/certs/794392.pem"
	I1017 19:08:37.452363   85117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.458096   85117 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 17 19:05 /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.458164   85117 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:05 /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.458223   85117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.466074   85117 command_runner.go:130] > 3ec20f2e
	I1017 19:08:37.466249   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/794392.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:08:37.478828   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:08:37.493772   85117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.499621   85117 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.499822   85117 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.499886   85117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.507945   85117 command_runner.go:130] > b5213941
	I1017 19:08:37.508223   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
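	The symlink runs above (one per CA file) implement OpenSSL's subject-hash lookup convention: each CA certificate is linked as <subject-hash>.0 under /etc/ssl/certs so that openssl and libssl can resolve it during verification. A minimal sketch of the same steps for one certificate:
	
		hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"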
	I1017 19:08:37.520563   85117 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:08:37.526401   85117 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:08:37.526439   85117 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1017 19:08:37.526449   85117 command_runner.go:130] > Device: 253,1	Inode: 1054372     Links: 1
	I1017 19:08:37.526460   85117 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1017 19:08:37.526477   85117 command_runner.go:130] > Access: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526489   85117 command_runner.go:130] > Modify: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526500   85117 command_runner.go:130] > Change: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526510   85117 command_runner.go:130] >  Birth: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526610   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:08:37.533974   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.534188   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:08:37.541725   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.541833   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:08:37.549277   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.549348   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:08:37.556865   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.556943   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:08:37.564379   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.564452   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 19:08:37.571575   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.571807   85117 kubeadm.go:400] StartCluster: {Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:08:37.571943   85117 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:08:37.572009   85117 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:08:37.614275   85117 command_runner.go:130] > 5052ee3b4b13e54f7516a211d580d31d7e4856f34ebe5b5bc8a1778244018fb0
	I1017 19:08:37.614306   85117 command_runner.go:130] > 56c02355399031d32d66f1780ee1bc7396eeb5eb1b454f946254fe345879e8e0
	I1017 19:08:37.614315   85117 command_runner.go:130] > 56048147246b1d30ce16d066a4bbb216f1f7c9b1459e21fa60ee108fdd3aa42a
	I1017 19:08:37.614325   85117 command_runner.go:130] > 1b1f7dfe245a6d20e55f02381f27ec11e1eec3bf32b8112aaab88ea95c008e93
	I1017 19:08:37.614332   85117 command_runner.go:130] > b4db2cb7b47399fb64d0f31922d185a1ae009961ae05b56d9514db6f489a25eb
	I1017 19:08:37.614340   85117 command_runner.go:130] > d6eeaf9720fb0a5853cabba8afe0f0c64370fd422e21db9af2a9b6ce4b9aecc1
	I1017 19:08:37.614347   85117 command_runner.go:130] > 26c7a235bb67e91ab6abf0c0282c65a526c3d2fc628ec6008956402a02d5b1e8
	I1017 19:08:37.614369   85117 command_runner.go:130] > 171e623260fdb36d39493caf1c0b8c10efb097287233e2565304b12ece716a85
	I1017 19:08:37.614383   85117 command_runner.go:130] > 0fe4cc88e7a7a757f4debf7f3ff8f76bef81d0a36e83bd994df86baa42f47a71
	I1017 19:08:37.614397   85117 command_runner.go:130] > 86dba9687f70280ffaa952d354e90ec1a4ff74d73869d9360c56690901ad9461
	I1017 19:08:37.614406   85117 command_runner.go:130] > 4d4ae675fa012cf6e18dd10516f8c83d32b364f3e27d8068722234a797bc7b1a
	I1017 19:08:37.614460   85117 cri.go:89] found id: "5052ee3b4b13e54f7516a211d580d31d7e4856f34ebe5b5bc8a1778244018fb0"
	I1017 19:08:37.614475   85117 cri.go:89] found id: "56c02355399031d32d66f1780ee1bc7396eeb5eb1b454f946254fe345879e8e0"
	I1017 19:08:37.614481   85117 cri.go:89] found id: "56048147246b1d30ce16d066a4bbb216f1f7c9b1459e21fa60ee108fdd3aa42a"
	I1017 19:08:37.614486   85117 cri.go:89] found id: "1b1f7dfe245a6d20e55f02381f27ec11e1eec3bf32b8112aaab88ea95c008e93"
	I1017 19:08:37.614490   85117 cri.go:89] found id: "b4db2cb7b47399fb64d0f31922d185a1ae009961ae05b56d9514db6f489a25eb"
	I1017 19:08:37.614498   85117 cri.go:89] found id: "d6eeaf9720fb0a5853cabba8afe0f0c64370fd422e21db9af2a9b6ce4b9aecc1"
	I1017 19:08:37.614513   85117 cri.go:89] found id: "26c7a235bb67e91ab6abf0c0282c65a526c3d2fc628ec6008956402a02d5b1e8"
	I1017 19:08:37.614519   85117 cri.go:89] found id: "171e623260fdb36d39493caf1c0b8c10efb097287233e2565304b12ece716a85"
	I1017 19:08:37.614521   85117 cri.go:89] found id: "0fe4cc88e7a7a757f4debf7f3ff8f76bef81d0a36e83bd994df86baa42f47a71"
	I1017 19:08:37.614530   85117 cri.go:89] found id: "86dba9687f70280ffaa952d354e90ec1a4ff74d73869d9360c56690901ad9461"
	I1017 19:08:37.614535   85117 cri.go:89] found id: "4d4ae675fa012cf6e18dd10516f8c83d32b364f3e27d8068722234a797bc7b1a"
	I1017 19:08:37.614538   85117 cri.go:89] found id: ""
	I1017 19:08:37.614600   85117 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-016863 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 13m58.183759081s for "functional-016863" cluster.
I1017 19:20:54.710703   79439 config.go:182] Loaded profile config "functional-016863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-016863 -n functional-016863
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-016863 -n functional-016863: exit status 2 (243.405069ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-016863 logs -n 25
E1017 19:23:47.741463   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-016863 logs -n 25: (6m36.168045951s)
helpers_test.go:260: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ addons-768633 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                           │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │ 17 Oct 25 19:00 UTC │
	│ ip      │ addons-768633 ip                                                                                                                                  │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:01 UTC │ 17 Oct 25 19:01 UTC │
	│ addons  │ addons-768633 addons disable ingress-dns --alsologtostderr -v=1                                                                                   │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:01 UTC │ 17 Oct 25 19:01 UTC │
	│ addons  │ addons-768633 addons disable ingress --alsologtostderr -v=1                                                                                       │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:01 UTC │ 17 Oct 25 19:01 UTC │
	│ stop    │ -p addons-768633                                                                                                                                  │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:01 UTC │ 17 Oct 25 19:03 UTC │
	│ addons  │ enable dashboard -p addons-768633                                                                                                                 │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:03 UTC │ 17 Oct 25 19:03 UTC │
	│ addons  │ disable dashboard -p addons-768633                                                                                                                │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:03 UTC │ 17 Oct 25 19:03 UTC │
	│ addons  │ disable gvisor -p addons-768633                                                                                                                   │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:03 UTC │ 17 Oct 25 19:03 UTC │
	│ delete  │ -p addons-768633                                                                                                                                  │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:03 UTC │ 17 Oct 25 19:03 UTC │
	│ start   │ -p nospam-712449 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-712449 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:03 UTC │ 17 Oct 25 19:04 UTC │
	│ start   │ nospam-712449 --log_dir /tmp/nospam-712449 start --dry-run                                                                                        │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │                     │
	│ start   │ nospam-712449 --log_dir /tmp/nospam-712449 start --dry-run                                                                                        │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │                     │
	│ start   │ nospam-712449 --log_dir /tmp/nospam-712449 start --dry-run                                                                                        │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │                     │
	│ pause   │ nospam-712449 --log_dir /tmp/nospam-712449 pause                                                                                                  │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ pause   │ nospam-712449 --log_dir /tmp/nospam-712449 pause                                                                                                  │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ pause   │ nospam-712449 --log_dir /tmp/nospam-712449 pause                                                                                                  │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ unpause │ nospam-712449 --log_dir /tmp/nospam-712449 unpause                                                                                                │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ unpause │ nospam-712449 --log_dir /tmp/nospam-712449 unpause                                                                                                │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ unpause │ nospam-712449 --log_dir /tmp/nospam-712449 unpause                                                                                                │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ stop    │ nospam-712449 --log_dir /tmp/nospam-712449 stop                                                                                                   │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:05 UTC │
	│ stop    │ nospam-712449 --log_dir /tmp/nospam-712449 stop                                                                                                   │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ stop    │ nospam-712449 --log_dir /tmp/nospam-712449 stop                                                                                                   │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ delete  │ -p nospam-712449                                                                                                                                  │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ start   │ -p functional-016863 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false           │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:06 UTC │
	│ start   │ -p functional-016863 --alsologtostderr -v=8                                                                                                       │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:06:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:06:56.570682   85117 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:06:56.570809   85117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:56.570820   85117 out.go:374] Setting ErrFile to fd 2...
	I1017 19:06:56.570826   85117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:56.571105   85117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-75534/.minikube/bin
	I1017 19:06:56.571578   85117 out.go:368] Setting JSON to false
	I1017 19:06:56.572426   85117 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6568,"bootTime":1760721449,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:06:56.572524   85117 start.go:141] virtualization: kvm guest
	I1017 19:06:56.574519   85117 out.go:179] * [functional-016863] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:06:56.575690   85117 notify.go:220] Checking for updates...
	I1017 19:06:56.575704   85117 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:06:56.577138   85117 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:06:56.578363   85117 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig
	I1017 19:06:56.579669   85117 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube
	I1017 19:06:56.581027   85117 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:06:56.582307   85117 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:06:56.583921   85117 config.go:182] Loaded profile config "functional-016863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:06:56.584037   85117 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:06:56.584492   85117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:06:56.584589   85117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:06:56.600478   85117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35877
	I1017 19:06:56.600991   85117 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:06:56.601750   85117 main.go:141] libmachine: Using API Version  1
	I1017 19:06:56.601786   85117 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:06:56.602161   85117 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:06:56.602390   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:06:56.635697   85117 out.go:179] * Using the kvm2 driver based on existing profile
	I1017 19:06:56.637016   85117 start.go:305] selected driver: kvm2
	I1017 19:06:56.637040   85117 start.go:925] validating driver "kvm2" against &{Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:06:56.637141   85117 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:06:56.637622   85117 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:06:56.637712   85117 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:06:56.651574   85117 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:06:56.651619   85117 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:06:56.665844   85117 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:06:56.666547   85117 cni.go:84] Creating CNI manager for ""
	I1017 19:06:56.666631   85117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 19:06:56.666699   85117 start.go:349] cluster config:
	{Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:06:56.666812   85117 iso.go:125] acquiring lock: {Name:mk89d24a0bd9a0a8cf0564a4affa55e11eaff101 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:06:56.668638   85117 out.go:179] * Starting "functional-016863" primary control-plane node in "functional-016863" cluster
	I1017 19:06:56.669893   85117 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:06:56.669940   85117 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:06:56.669951   85117 cache.go:58] Caching tarball of preloaded images
	I1017 19:06:56.670102   85117 preload.go:233] Found /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:06:56.670116   85117 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:06:56.670235   85117 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/config.json ...
	I1017 19:06:56.670445   85117 start.go:360] acquireMachinesLock for functional-016863: {Name:mke0c3abe726945d0c60793aa0bf26eb33df7fed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1017 19:06:56.670494   85117 start.go:364] duration metric: took 29.325µs to acquireMachinesLock for "functional-016863"
	I1017 19:06:56.670514   85117 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:06:56.670524   85117 fix.go:54] fixHost starting: 
	I1017 19:06:56.670828   85117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:06:56.670877   85117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:06:56.683516   85117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I1017 19:06:56.683978   85117 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:06:56.684470   85117 main.go:141] libmachine: Using API Version  1
	I1017 19:06:56.684493   85117 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:06:56.684844   85117 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:06:56.685047   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:06:56.685223   85117 main.go:141] libmachine: (functional-016863) Calling .GetState
	I1017 19:06:56.686913   85117 fix.go:112] recreateIfNeeded on functional-016863: state=Running err=<nil>
	W1017 19:06:56.686945   85117 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:06:56.688754   85117 out.go:252] * Updating the running kvm2 "functional-016863" VM ...
	I1017 19:06:56.688779   85117 machine.go:93] provisionDockerMachine start ...
	I1017 19:06:56.688795   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:06:56.689021   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:56.691985   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.692501   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:56.692527   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.692713   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:56.692904   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.693142   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.693299   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:56.693474   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:56.693724   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:56.693736   85117 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:06:56.799511   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016863
	
	I1017 19:06:56.799542   85117 main.go:141] libmachine: (functional-016863) Calling .GetMachineName
	I1017 19:06:56.799819   85117 buildroot.go:166] provisioning hostname "functional-016863"
	I1017 19:06:56.799862   85117 main.go:141] libmachine: (functional-016863) Calling .GetMachineName
	I1017 19:06:56.800154   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:56.803810   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.804342   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:56.804375   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.804593   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:56.804779   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.804950   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.805112   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:56.805279   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:56.805490   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:56.805503   85117 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-016863 && echo "functional-016863" | sudo tee /etc/hostname
	I1017 19:06:56.929174   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016863
	
	I1017 19:06:56.929205   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:56.932429   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.932929   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:56.932954   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.933186   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:56.933423   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.933612   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.933826   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:56.934076   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:56.934309   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:56.934326   85117 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-016863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-016863/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-016863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:06:57.042297   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:06:57.042330   85117 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21753-75534/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-75534/.minikube}
	I1017 19:06:57.042373   85117 buildroot.go:174] setting up certificates
	I1017 19:06:57.042382   85117 provision.go:84] configureAuth start
	I1017 19:06:57.042395   85117 main.go:141] libmachine: (functional-016863) Calling .GetMachineName
	I1017 19:06:57.042715   85117 main.go:141] libmachine: (functional-016863) Calling .GetIP
	I1017 19:06:57.045902   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.046469   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.046508   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.046778   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:57.049360   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.049857   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.049902   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.050076   85117 provision.go:143] copyHostCerts
	I1017 19:06:57.050123   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem
	I1017 19:06:57.050183   85117 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem, removing ...
	I1017 19:06:57.050205   85117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem
	I1017 19:06:57.050294   85117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem (1082 bytes)
	I1017 19:06:57.050425   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem
	I1017 19:06:57.050463   85117 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem, removing ...
	I1017 19:06:57.050473   85117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem
	I1017 19:06:57.050602   85117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem (1123 bytes)
	I1017 19:06:57.050772   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem
	I1017 19:06:57.050815   85117 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem, removing ...
	I1017 19:06:57.050825   85117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem
	I1017 19:06:57.050881   85117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem (1679 bytes)
	I1017 19:06:57.051013   85117 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem org=jenkins.functional-016863 san=[127.0.0.1 192.168.39.205 functional-016863 localhost minikube]
	I1017 19:06:57.269277   85117 provision.go:177] copyRemoteCerts
	I1017 19:06:57.269362   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:06:57.269401   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:57.272458   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.272834   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.272866   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.273060   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:57.273266   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:57.273480   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:57.273640   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:06:57.362432   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:06:57.362511   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:06:57.412884   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:06:57.413107   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 19:06:57.450092   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:06:57.450212   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:06:57.486026   85117 provision.go:87] duration metric: took 443.605637ms to configureAuth
	I1017 19:06:57.486057   85117 buildroot.go:189] setting minikube options for container-runtime
	I1017 19:06:57.486228   85117 config.go:182] Loaded profile config "functional-016863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:06:57.486309   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:57.489476   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.489895   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.489928   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.490160   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:57.490354   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:57.490544   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:57.490703   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:57.490888   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:57.491101   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:57.491114   85117 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:07:03.084984   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:07:03.085021   85117 machine.go:96] duration metric: took 6.396234121s to provisionDockerMachine
	I1017 19:07:03.085042   85117 start.go:293] postStartSetup for "functional-016863" (driver="kvm2")
	I1017 19:07:03.085056   85117 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:07:03.085084   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.085514   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:07:03.085593   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.089211   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.089621   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.089655   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.089838   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.090055   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.090184   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.090354   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:07:03.173813   85117 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:07:03.179411   85117 command_runner.go:130] > NAME=Buildroot
	I1017 19:07:03.179437   85117 command_runner.go:130] > VERSION=2025.02-dirty
	I1017 19:07:03.179441   85117 command_runner.go:130] > ID=buildroot
	I1017 19:07:03.179446   85117 command_runner.go:130] > VERSION_ID=2025.02
	I1017 19:07:03.179452   85117 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1017 19:07:03.179493   85117 info.go:137] Remote host: Buildroot 2025.02
	I1017 19:07:03.179508   85117 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-75534/.minikube/addons for local assets ...
	I1017 19:07:03.179595   85117 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-75534/.minikube/files for local assets ...
	I1017 19:07:03.179714   85117 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> 794392.pem in /etc/ssl/certs
	I1017 19:07:03.179729   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> /etc/ssl/certs/794392.pem
	I1017 19:07:03.179835   85117 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts -> hosts in /etc/test/nested/copy/79439
	I1017 19:07:03.179847   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts -> /etc/test/nested/copy/79439/hosts
	I1017 19:07:03.179893   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/79439
	I1017 19:07:03.192128   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem --> /etc/ssl/certs/794392.pem (1708 bytes)
	I1017 19:07:03.223838   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts --> /etc/test/nested/copy/79439/hosts (40 bytes)
	I1017 19:07:03.313679   85117 start.go:296] duration metric: took 228.61978ms for postStartSetup
	I1017 19:07:03.313721   85117 fix.go:56] duration metric: took 6.643198174s for fixHost
	I1017 19:07:03.313742   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.317578   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.318077   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.318115   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.318367   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.318648   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.318838   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.319029   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.319295   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:07:03.319597   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:07:03.319613   85117 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1017 19:07:03.479608   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760728023.470011514
	
	I1017 19:07:03.479635   85117 fix.go:216] guest clock: 1760728023.470011514
	I1017 19:07:03.479642   85117 fix.go:229] Guest: 2025-10-17 19:07:03.470011514 +0000 UTC Remote: 2025-10-17 19:07:03.313724873 +0000 UTC m=+6.781586281 (delta=156.286641ms)
	I1017 19:07:03.479664   85117 fix.go:200] guest clock delta is within tolerance: 156.286641ms
	I1017 19:07:03.479671   85117 start.go:83] releasing machines lock for "functional-016863", held for 6.809163445s
	I1017 19:07:03.479692   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.480016   85117 main.go:141] libmachine: (functional-016863) Calling .GetIP
	I1017 19:07:03.483255   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.483786   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.483830   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.484026   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.484650   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.484910   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.485041   85117 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:07:03.485087   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.485146   85117 ssh_runner.go:195] Run: cat /version.json
	I1017 19:07:03.485170   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.488247   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.488613   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.488732   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.488760   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.488948   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.489117   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.489150   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.489166   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.489373   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.489440   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.489584   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.489660   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:07:03.489750   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.489896   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:07:03.669674   85117 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1017 19:07:03.669755   85117 command_runner.go:130] > {"iso_version": "v1.37.0-1760609724-21757", "kicbase_version": "v0.0.48-1760363564-21724", "minikube_version": "v1.37.0", "commit": "fd6729aa481bc45098452b0ed0ffbe097c29d1bb"}
	I1017 19:07:03.669885   85117 ssh_runner.go:195] Run: systemctl --version
	I1017 19:07:03.691813   85117 command_runner.go:130] > systemd 256 (256.7)
	I1017 19:07:03.691879   85117 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1017 19:07:03.691965   85117 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:07:03.942910   85117 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1017 19:07:03.963385   85117 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1017 19:07:03.963654   85117 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:07:03.963723   85117 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:07:04.004504   85117 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:07:04.004543   85117 start.go:495] detecting cgroup driver to use...
	I1017 19:07:04.004649   85117 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:07:04.048623   85117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:07:04.093677   85117 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:07:04.093751   85117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:07:04.125946   85117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:07:04.177031   85117 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:07:04.556434   85117 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:07:04.871840   85117 docker.go:234] disabling docker service ...
	I1017 19:07:04.871920   85117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:07:04.914455   85117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:07:04.944209   85117 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:07:05.273173   85117 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:07:05.563772   85117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:07:05.602259   85117 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:07:05.639391   85117 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1017 19:07:05.639452   85117 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:07:05.639509   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.662293   85117 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:07:05.662360   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.681766   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.702415   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.723309   85117 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:07:05.743334   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.758794   85117 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.777348   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.792297   85117 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:07:05.810337   85117 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1017 19:07:05.810427   85117 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:07:05.829378   85117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:07:06.061473   85117 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:08:36.459335   85117 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.39776602s)
	I1017 19:08:36.459402   85117 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:08:36.459487   85117 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:08:36.466176   85117 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1017 19:08:36.466208   85117 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1017 19:08:36.466216   85117 command_runner.go:130] > Device: 0,23	Inode: 1978        Links: 1
	I1017 19:08:36.466222   85117 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1017 19:08:36.466229   85117 command_runner.go:130] > Access: 2025-10-17 19:08:36.354383352 +0000
	I1017 19:08:36.466239   85117 command_runner.go:130] > Modify: 2025-10-17 19:08:36.274379788 +0000
	I1017 19:08:36.466245   85117 command_runner.go:130] > Change: 2025-10-17 19:08:36.274379788 +0000
	I1017 19:08:36.466267   85117 command_runner.go:130] >  Birth: 2025-10-17 19:08:36.274379788 +0000
	I1017 19:08:36.466319   85117 start.go:563] Will wait 60s for crictl version
	I1017 19:08:36.466390   85117 ssh_runner.go:195] Run: which crictl
	I1017 19:08:36.470951   85117 command_runner.go:130] > /usr/bin/crictl
	I1017 19:08:36.471037   85117 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1017 19:08:36.516077   85117 command_runner.go:130] > Version:  0.1.0
	I1017 19:08:36.516101   85117 command_runner.go:130] > RuntimeName:  cri-o
	I1017 19:08:36.516106   85117 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1017 19:08:36.516111   85117 command_runner.go:130] > RuntimeApiVersion:  v1
	I1017 19:08:36.516132   85117 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
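Once the socket exists, the version probe above can be reproduced by hand against the same endpoint; the endpoint path matches the /etc/crictl.yaml written earlier in this log:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version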
	I1017 19:08:36.516223   85117 ssh_runner.go:195] Run: crio --version
	I1017 19:08:36.548879   85117 command_runner.go:130] > crio version 1.29.1
	I1017 19:08:36.548904   85117 command_runner.go:130] > Version:        1.29.1
	I1017 19:08:36.548909   85117 command_runner.go:130] > GitCommit:      unknown
	I1017 19:08:36.548925   85117 command_runner.go:130] > GitCommitDate:  unknown
	I1017 19:08:36.548929   85117 command_runner.go:130] > GitTreeState:   clean
	I1017 19:08:36.548935   85117 command_runner.go:130] > BuildDate:      2025-10-16T13:23:57Z
	I1017 19:08:36.548939   85117 command_runner.go:130] > GoVersion:      go1.23.4
	I1017 19:08:36.548942   85117 command_runner.go:130] > Compiler:       gc
	I1017 19:08:36.548947   85117 command_runner.go:130] > Platform:       linux/amd64
	I1017 19:08:36.548951   85117 command_runner.go:130] > Linkmode:       dynamic
	I1017 19:08:36.548955   85117 command_runner.go:130] > BuildTags:      
	I1017 19:08:36.548959   85117 command_runner.go:130] >   containers_image_ostree_stub
	I1017 19:08:36.548963   85117 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1017 19:08:36.548966   85117 command_runner.go:130] >   btrfs_noversion
	I1017 19:08:36.548970   85117 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1017 19:08:36.548974   85117 command_runner.go:130] >   libdm_no_deferred_remove
	I1017 19:08:36.548978   85117 command_runner.go:130] >   seccomp
	I1017 19:08:36.548982   85117 command_runner.go:130] > LDFlags:          unknown
	I1017 19:08:36.549001   85117 command_runner.go:130] > SeccompEnabled:   true
	I1017 19:08:36.549005   85117 command_runner.go:130] > AppArmorEnabled:  false
	I1017 19:08:36.549081   85117 ssh_runner.go:195] Run: crio --version
	I1017 19:08:36.579072   85117 command_runner.go:130] > crio version 1.29.1
	I1017 19:08:36.579097   85117 command_runner.go:130] > Version:        1.29.1
	I1017 19:08:36.579102   85117 command_runner.go:130] > GitCommit:      unknown
	I1017 19:08:36.579106   85117 command_runner.go:130] > GitCommitDate:  unknown
	I1017 19:08:36.579109   85117 command_runner.go:130] > GitTreeState:   clean
	I1017 19:08:36.579114   85117 command_runner.go:130] > BuildDate:      2025-10-16T13:23:57Z
	I1017 19:08:36.579118   85117 command_runner.go:130] > GoVersion:      go1.23.4
	I1017 19:08:36.579122   85117 command_runner.go:130] > Compiler:       gc
	I1017 19:08:36.579126   85117 command_runner.go:130] > Platform:       linux/amd64
	I1017 19:08:36.579129   85117 command_runner.go:130] > Linkmode:       dynamic
	I1017 19:08:36.579133   85117 command_runner.go:130] > BuildTags:      
	I1017 19:08:36.579137   85117 command_runner.go:130] >   containers_image_ostree_stub
	I1017 19:08:36.579141   85117 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1017 19:08:36.579144   85117 command_runner.go:130] >   btrfs_noversion
	I1017 19:08:36.579148   85117 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1017 19:08:36.579152   85117 command_runner.go:130] >   libdm_no_deferred_remove
	I1017 19:08:36.579156   85117 command_runner.go:130] >   seccomp
	I1017 19:08:36.579159   85117 command_runner.go:130] > LDFlags:          unknown
	I1017 19:08:36.579162   85117 command_runner.go:130] > SeccompEnabled:   true
	I1017 19:08:36.579166   85117 command_runner.go:130] > AppArmorEnabled:  false
	I1017 19:08:36.581921   85117 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1017 19:08:36.583156   85117 main.go:141] libmachine: (functional-016863) Calling .GetIP
	I1017 19:08:36.586303   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:08:36.586761   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:08:36.586791   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:08:36.587045   85117 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1017 19:08:36.592096   85117 command_runner.go:130] > 192.168.39.1	host.minikube.internal
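The same host-mapping check can be reproduced from the test host, following the ssh invocation form used elsewhere in this report:

	out/minikube-linux-amd64 -p functional-016863 ssh "grep host.minikube.internal /etc/hosts"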
	I1017 19:08:36.592194   85117 kubeadm.go:883] updating cluster {Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
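The cluster spec above is minikube's persisted profile config. A sketch for inspecting the same data outside the logs; the path follows minikube's standard profile store (adjust if MINIKUBE_HOME is set), and the jq filter assumes the usual config.json layout:

	jq '.KubernetesConfig' ~/.minikube/profiles/functional-016863/config.json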
	I1017 19:08:36.592323   85117 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:08:36.592384   85117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:08:36.644213   85117 command_runner.go:130] > {
	I1017 19:08:36.644235   85117 command_runner.go:130] >   "images": [
	I1017 19:08:36.644239   85117 command_runner.go:130] >     {
	I1017 19:08:36.644246   85117 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1017 19:08:36.644251   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644257   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1017 19:08:36.644260   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644265   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644287   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1017 19:08:36.644298   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1017 19:08:36.644304   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644310   85117 command_runner.go:130] >       "size": "109379124",
	I1017 19:08:36.644319   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644328   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644357   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644368   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644379   85117 command_runner.go:130] >     },
	I1017 19:08:36.644384   85117 command_runner.go:130] >     {
	I1017 19:08:36.644397   85117 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1017 19:08:36.644403   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644412   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1017 19:08:36.644418   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644429   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644441   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1017 19:08:36.644455   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1017 19:08:36.644463   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644489   85117 command_runner.go:130] >       "size": "31470524",
	I1017 19:08:36.644500   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644506   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644517   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644524   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644532   85117 command_runner.go:130] >     },
	I1017 19:08:36.644537   85117 command_runner.go:130] >     {
	I1017 19:08:36.644546   85117 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1017 19:08:36.644570   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644577   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1017 19:08:36.644586   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644592   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644602   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1017 19:08:36.644610   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1017 19:08:36.644616   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644620   85117 command_runner.go:130] >       "size": "76103547",
	I1017 19:08:36.644623   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644628   85117 command_runner.go:130] >       "username": "nonroot",
	I1017 19:08:36.644634   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644638   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644644   85117 command_runner.go:130] >     },
	I1017 19:08:36.644655   85117 command_runner.go:130] >     {
	I1017 19:08:36.644664   85117 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1017 19:08:36.644668   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644675   85117 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1017 19:08:36.644678   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644685   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644692   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1017 19:08:36.644707   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1017 19:08:36.644713   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644716   85117 command_runner.go:130] >       "size": "195976448",
	I1017 19:08:36.644720   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.644726   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.644729   85117 command_runner.go:130] >       },
	I1017 19:08:36.644733   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644737   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644741   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644744   85117 command_runner.go:130] >     },
	I1017 19:08:36.644747   85117 command_runner.go:130] >     {
	I1017 19:08:36.644753   85117 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1017 19:08:36.644760   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644764   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1017 19:08:36.644767   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644772   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644781   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1017 19:08:36.644788   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1017 19:08:36.644794   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644798   85117 command_runner.go:130] >       "size": "89046001",
	I1017 19:08:36.644802   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.644806   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.644810   85117 command_runner.go:130] >       },
	I1017 19:08:36.644813   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644819   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644822   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644830   85117 command_runner.go:130] >     },
	I1017 19:08:36.644836   85117 command_runner.go:130] >     {
	I1017 19:08:36.644842   85117 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1017 19:08:36.644845   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644850   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1017 19:08:36.644856   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644860   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644868   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1017 19:08:36.644877   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1017 19:08:36.644880   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644884   85117 command_runner.go:130] >       "size": "76004181",
	I1017 19:08:36.644888   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.644892   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.644895   85117 command_runner.go:130] >       },
	I1017 19:08:36.644899   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644902   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644908   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644911   85117 command_runner.go:130] >     },
	I1017 19:08:36.644914   85117 command_runner.go:130] >     {
	I1017 19:08:36.644920   85117 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1017 19:08:36.644924   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644928   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1017 19:08:36.644932   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644944   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644951   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1017 19:08:36.644958   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1017 19:08:36.644961   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644964   85117 command_runner.go:130] >       "size": "73138073",
	I1017 19:08:36.644968   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644972   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644975   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644979   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644982   85117 command_runner.go:130] >     },
	I1017 19:08:36.644991   85117 command_runner.go:130] >     {
	I1017 19:08:36.644999   85117 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1017 19:08:36.645003   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.645010   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1017 19:08:36.645013   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645017   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.645041   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1017 19:08:36.645052   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1017 19:08:36.645055   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645059   85117 command_runner.go:130] >       "size": "53844823",
	I1017 19:08:36.645062   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.645066   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.645068   85117 command_runner.go:130] >       },
	I1017 19:08:36.645072   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.645075   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.645079   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.645081   85117 command_runner.go:130] >     },
	I1017 19:08:36.645084   85117 command_runner.go:130] >     {
	I1017 19:08:36.645090   85117 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1017 19:08:36.645093   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.645097   85117 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1017 19:08:36.645100   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645104   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.645110   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1017 19:08:36.645116   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1017 19:08:36.645120   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645123   85117 command_runner.go:130] >       "size": "742092",
	I1017 19:08:36.645126   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.645129   85117 command_runner.go:130] >         "value": "65535"
	I1017 19:08:36.645132   85117 command_runner.go:130] >       },
	I1017 19:08:36.645136   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.645143   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.645147   85117 command_runner.go:130] >       "pinned": true
	I1017 19:08:36.645154   85117 command_runner.go:130] >     }
	I1017 19:08:36.645157   85117 command_runner.go:130] >   ]
	I1017 19:08:36.645160   85117 command_runner.go:130] > }
	I1017 19:08:36.645398   85117 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:08:36.645415   85117 crio.go:433] Images already preloaded, skipping extraction
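The preload check parses the JSON dump above. To summarize the same output by hand, assuming jq is available on the node:

	sudo crictl images --output json | jq -r '.images[].repoTags[]'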
	I1017 19:08:36.645478   85117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:08:36.684800   85117 command_runner.go:130] > {
	I1017 19:08:36.684832   85117 command_runner.go:130] >   "images": [
	I1017 19:08:36.684855   85117 command_runner.go:130] >     {
	I1017 19:08:36.684869   85117 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1017 19:08:36.684877   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.684887   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1017 19:08:36.684892   85117 command_runner.go:130] >       ],
	I1017 19:08:36.684896   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.684909   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1017 19:08:36.684916   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1017 19:08:36.684919   85117 command_runner.go:130] >       ],
	I1017 19:08:36.684923   85117 command_runner.go:130] >       "size": "109379124",
	I1017 19:08:36.684927   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.684930   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.684935   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.684938   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.684942   85117 command_runner.go:130] >     },
	I1017 19:08:36.684945   85117 command_runner.go:130] >     {
	I1017 19:08:36.684950   85117 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1017 19:08:36.684955   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.684960   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1017 19:08:36.684973   85117 command_runner.go:130] >       ],
	I1017 19:08:36.684980   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.684994   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1017 19:08:36.685002   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1017 19:08:36.685005   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685013   85117 command_runner.go:130] >       "size": "31470524",
	I1017 19:08:36.685018   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.685021   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685025   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685029   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685032   85117 command_runner.go:130] >     },
	I1017 19:08:36.685035   85117 command_runner.go:130] >     {
	I1017 19:08:36.685041   85117 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1017 19:08:36.685045   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685055   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1017 19:08:36.685061   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685064   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685072   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1017 19:08:36.685081   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1017 19:08:36.685084   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685088   85117 command_runner.go:130] >       "size": "76103547",
	I1017 19:08:36.685092   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.685095   85117 command_runner.go:130] >       "username": "nonroot",
	I1017 19:08:36.685098   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685105   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685108   85117 command_runner.go:130] >     },
	I1017 19:08:36.685111   85117 command_runner.go:130] >     {
	I1017 19:08:36.685116   85117 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1017 19:08:36.685121   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685125   85117 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1017 19:08:36.685128   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685132   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685140   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1017 19:08:36.685152   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1017 19:08:36.685158   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685162   85117 command_runner.go:130] >       "size": "195976448",
	I1017 19:08:36.685165   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685169   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685172   85117 command_runner.go:130] >       },
	I1017 19:08:36.685176   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685179   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685183   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685186   85117 command_runner.go:130] >     },
	I1017 19:08:36.685195   85117 command_runner.go:130] >     {
	I1017 19:08:36.685202   85117 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1017 19:08:36.685205   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685209   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1017 19:08:36.685217   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685224   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685230   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1017 19:08:36.685243   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1017 19:08:36.685249   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685252   85117 command_runner.go:130] >       "size": "89046001",
	I1017 19:08:36.685256   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685259   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685263   85117 command_runner.go:130] >       },
	I1017 19:08:36.685266   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685270   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685274   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685277   85117 command_runner.go:130] >     },
	I1017 19:08:36.685280   85117 command_runner.go:130] >     {
	I1017 19:08:36.685292   85117 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1017 19:08:36.685301   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685310   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1017 19:08:36.685322   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685332   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685344   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1017 19:08:36.685361   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1017 19:08:36.685371   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685378   85117 command_runner.go:130] >       "size": "76004181",
	I1017 19:08:36.685388   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685394   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685403   85117 command_runner.go:130] >       },
	I1017 19:08:36.685407   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685414   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685418   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685421   85117 command_runner.go:130] >     },
	I1017 19:08:36.685424   85117 command_runner.go:130] >     {
	I1017 19:08:36.685430   85117 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1017 19:08:36.685437   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685448   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1017 19:08:36.685454   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685457   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685464   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1017 19:08:36.685473   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1017 19:08:36.685476   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685483   85117 command_runner.go:130] >       "size": "73138073",
	I1017 19:08:36.685487   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.685491   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685495   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685498   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685502   85117 command_runner.go:130] >     },
	I1017 19:08:36.685505   85117 command_runner.go:130] >     {
	I1017 19:08:36.685511   85117 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1017 19:08:36.685517   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685522   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1017 19:08:36.685528   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685531   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685577   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1017 19:08:36.685591   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1017 19:08:36.685594   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685598   85117 command_runner.go:130] >       "size": "53844823",
	I1017 19:08:36.685601   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685604   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685607   85117 command_runner.go:130] >       },
	I1017 19:08:36.685611   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685614   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685618   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685621   85117 command_runner.go:130] >     },
	I1017 19:08:36.685624   85117 command_runner.go:130] >     {
	I1017 19:08:36.685629   85117 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1017 19:08:36.685638   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685642   85117 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1017 19:08:36.685651   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685658   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685664   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1017 19:08:36.685673   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1017 19:08:36.685677   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685680   85117 command_runner.go:130] >       "size": "742092",
	I1017 19:08:36.685684   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685688   85117 command_runner.go:130] >         "value": "65535"
	I1017 19:08:36.685691   85117 command_runner.go:130] >       },
	I1017 19:08:36.685697   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685700   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685703   85117 command_runner.go:130] >       "pinned": true
	I1017 19:08:36.685706   85117 command_runner.go:130] >     }
	I1017 19:08:36.685711   85117 command_runner.go:130] >   ]
	I1017 19:08:36.685714   85117 command_runner.go:130] > }
	I1017 19:08:36.685822   85117 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:08:36.685834   85117 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:08:36.685842   85117 kubeadm.go:934] updating node { 192.168.39.205 8441 v1.34.1 crio true true} ...
	I1017 19:08:36.685955   85117 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-016863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
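To confirm the flags in this drop-in actually reached the running kubelet, standard systemd and proc tooling works on the node (unit name kubelet, per the [Unit]/[Service] sections above):

	sudo systemctl cat kubelet --no-pager
	pgrep -a kubelet   # live command line, including --node-ip and --hostname-override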
	I1017 19:08:36.686028   85117 ssh_runner.go:195] Run: crio config
	I1017 19:08:36.721698   85117 command_runner.go:130] ! time="2025-10-17 19:08:36.711815300Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1017 19:08:36.726934   85117 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1017 19:08:36.733071   85117 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1017 19:08:36.733099   85117 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1017 19:08:36.733109   85117 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1017 19:08:36.733113   85117 command_runner.go:130] > #
	I1017 19:08:36.733123   85117 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1017 19:08:36.733131   85117 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1017 19:08:36.733140   85117 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1017 19:08:36.733156   85117 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1017 19:08:36.733165   85117 command_runner.go:130] > # reload'.
	I1017 19:08:36.733177   85117 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1017 19:08:36.733189   85117 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1017 19:08:36.733199   85117 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1017 19:08:36.733209   85117 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1017 19:08:36.733222   85117 command_runner.go:130] > [crio]
	I1017 19:08:36.733230   85117 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1017 19:08:36.733234   85117 command_runner.go:130] > # containers images, in this directory.
	I1017 19:08:36.733241   85117 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1017 19:08:36.733256   85117 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1017 19:08:36.733263   85117 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1017 19:08:36.733270   85117 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1017 19:08:36.733277   85117 command_runner.go:130] > # imagestore = ""
	I1017 19:08:36.733283   85117 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1017 19:08:36.733291   85117 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1017 19:08:36.733296   85117 command_runner.go:130] > # storage_driver = "overlay"
	I1017 19:08:36.733307   85117 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1017 19:08:36.733320   85117 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1017 19:08:36.733327   85117 command_runner.go:130] > storage_option = [
	I1017 19:08:36.733337   85117 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1017 19:08:36.733342   85117 command_runner.go:130] > ]
	I1017 19:08:36.733354   85117 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1017 19:08:36.733363   85117 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1017 19:08:36.733368   85117 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1017 19:08:36.733374   85117 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1017 19:08:36.733380   85117 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1017 19:08:36.733387   85117 command_runner.go:130] > # always happen on a node reboot
	I1017 19:08:36.733391   85117 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1017 19:08:36.733411   85117 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1017 19:08:36.733424   85117 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1017 19:08:36.733432   85117 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1017 19:08:36.733443   85117 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1017 19:08:36.733456   85117 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1017 19:08:36.733470   85117 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1017 19:08:36.733480   85117 command_runner.go:130] > # internal_wipe = true
	I1017 19:08:36.733489   85117 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1017 19:08:36.733497   85117 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1017 19:08:36.733504   85117 command_runner.go:130] > # internal_repair = false
	I1017 19:08:36.733522   85117 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1017 19:08:36.733534   85117 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1017 19:08:36.733544   85117 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1017 19:08:36.733565   85117 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1017 19:08:36.733582   85117 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1017 19:08:36.733590   85117 command_runner.go:130] > [crio.api]
	I1017 19:08:36.733598   85117 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1017 19:08:36.733608   85117 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1017 19:08:36.733616   85117 command_runner.go:130] > # IP address on which the stream server will listen.
	I1017 19:08:36.733626   85117 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1017 19:08:36.733636   85117 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1017 19:08:36.733647   85117 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1017 19:08:36.733653   85117 command_runner.go:130] > # stream_port = "0"
	I1017 19:08:36.733665   85117 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1017 19:08:36.733671   85117 command_runner.go:130] > # stream_enable_tls = false
	I1017 19:08:36.733683   85117 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1017 19:08:36.733692   85117 command_runner.go:130] > # stream_idle_timeout = ""
	I1017 19:08:36.733699   85117 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1017 19:08:36.733709   85117 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1017 19:08:36.733719   85117 command_runner.go:130] > # minutes.
	I1017 19:08:36.733729   85117 command_runner.go:130] > # stream_tls_cert = ""
	I1017 19:08:36.733738   85117 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1017 19:08:36.733749   85117 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1017 19:08:36.733755   85117 command_runner.go:130] > # stream_tls_key = ""
	I1017 19:08:36.733767   85117 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1017 19:08:36.733777   85117 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1017 19:08:36.733807   85117 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1017 19:08:36.733817   85117 command_runner.go:130] > # stream_tls_ca = ""
	I1017 19:08:36.733828   85117 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1017 19:08:36.733839   85117 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1017 19:08:36.733850   85117 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1017 19:08:36.733860   85117 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1017 19:08:36.733870   85117 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1017 19:08:36.733888   85117 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1017 19:08:36.733894   85117 command_runner.go:130] > [crio.runtime]
	I1017 19:08:36.733902   85117 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1017 19:08:36.733914   85117 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1017 19:08:36.733923   85117 command_runner.go:130] > # "nofile=1024:2048"
	I1017 19:08:36.733936   85117 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1017 19:08:36.733945   85117 command_runner.go:130] > # default_ulimits = [
	I1017 19:08:36.733950   85117 command_runner.go:130] > # ]
	I1017 19:08:36.733961   85117 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1017 19:08:36.733966   85117 command_runner.go:130] > # no_pivot = false
	I1017 19:08:36.733974   85117 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1017 19:08:36.733984   85117 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1017 19:08:36.733990   85117 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1017 19:08:36.734005   85117 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1017 19:08:36.734017   85117 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1017 19:08:36.734041   85117 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1017 19:08:36.734050   85117 command_runner.go:130] > conmon = "/usr/bin/conmon"
	I1017 19:08:36.734057   85117 command_runner.go:130] > # Cgroup setting for conmon
	I1017 19:08:36.734070   85117 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1017 19:08:36.734079   85117 command_runner.go:130] > conmon_cgroup = "pod"
	I1017 19:08:36.734085   85117 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1017 19:08:36.734096   85117 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1017 19:08:36.734105   85117 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1017 19:08:36.734115   85117 command_runner.go:130] > conmon_env = [
	I1017 19:08:36.734124   85117 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1017 19:08:36.734133   85117 command_runner.go:130] > ]
	I1017 19:08:36.734142   85117 command_runner.go:130] > # Additional environment variables to set for all the
	I1017 19:08:36.734152   85117 command_runner.go:130] > # containers. These are overridden if set in the
	I1017 19:08:36.734161   85117 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1017 19:08:36.734170   85117 command_runner.go:130] > # default_env = [
	I1017 19:08:36.734175   85117 command_runner.go:130] > # ]
	I1017 19:08:36.734186   85117 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1017 19:08:36.734193   85117 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1017 19:08:36.734374   85117 command_runner.go:130] > # selinux = false
	I1017 19:08:36.734484   85117 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1017 19:08:36.734495   85117 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1017 19:08:36.734505   85117 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1017 19:08:36.734516   85117 command_runner.go:130] > # seccomp_profile = ""
	I1017 19:08:36.734531   85117 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1017 19:08:36.734543   85117 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1017 19:08:36.734567   85117 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1017 19:08:36.734585   85117 command_runner.go:130] > # which might increase security.
	I1017 19:08:36.734593   85117 command_runner.go:130] > # This option is currently deprecated,
	I1017 19:08:36.734610   85117 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1017 19:08:36.734624   85117 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1017 19:08:36.734634   85117 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1017 19:08:36.734646   85117 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1017 19:08:36.734697   85117 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1017 19:08:36.735591   85117 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1017 19:08:36.735609   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.735623   85117 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1017 19:08:36.735636   85117 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1017 19:08:36.735643   85117 command_runner.go:130] > # the cgroup blockio controller.
	I1017 19:08:36.735656   85117 command_runner.go:130] > # blockio_config_file = ""
	I1017 19:08:36.735670   85117 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1017 19:08:36.735675   85117 command_runner.go:130] > # blockio parameters.
	I1017 19:08:36.735681   85117 command_runner.go:130] > # blockio_reload = false
	I1017 19:08:36.735706   85117 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1017 19:08:36.735733   85117 command_runner.go:130] > # irqbalance daemon.
	I1017 19:08:36.735812   85117 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1017 19:08:36.735833   85117 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1017 19:08:36.736170   85117 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1017 19:08:36.736193   85117 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1017 19:08:36.736203   85117 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1017 19:08:36.736229   85117 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1017 19:08:36.736240   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.736246   85117 command_runner.go:130] > # rdt_config_file = ""
	I1017 19:08:36.736258   85117 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1017 19:08:36.736268   85117 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1017 19:08:36.736300   85117 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1017 19:08:36.736312   85117 command_runner.go:130] > # separate_pull_cgroup = ""
	I1017 19:08:36.736321   85117 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1017 19:08:36.736329   85117 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1017 19:08:36.736335   85117 command_runner.go:130] > # will be added.
	I1017 19:08:36.736341   85117 command_runner.go:130] > # default_capabilities = [
	I1017 19:08:36.736349   85117 command_runner.go:130] > # 	"CHOWN",
	I1017 19:08:36.736355   85117 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1017 19:08:36.736360   85117 command_runner.go:130] > # 	"FSETID",
	I1017 19:08:36.736366   85117 command_runner.go:130] > # 	"FOWNER",
	I1017 19:08:36.736374   85117 command_runner.go:130] > # 	"SETGID",
	I1017 19:08:36.736379   85117 command_runner.go:130] > # 	"SETUID",
	I1017 19:08:36.736384   85117 command_runner.go:130] > # 	"SETPCAP",
	I1017 19:08:36.736392   85117 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1017 19:08:36.736401   85117 command_runner.go:130] > # 	"KILL",
	I1017 19:08:36.736409   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736420   85117 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1017 19:08:36.736433   85117 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1017 19:08:36.736444   85117 command_runner.go:130] > # add_inheritable_capabilities = false
	I1017 19:08:36.736452   85117 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1017 19:08:36.736463   85117 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1017 19:08:36.736472   85117 command_runner.go:130] > default_sysctls = [
	I1017 19:08:36.736482   85117 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1017 19:08:36.736490   85117 command_runner.go:130] > ]
	I1017 19:08:36.736501   85117 command_runner.go:130] > # List of devices on the host that a
	I1017 19:08:36.736513   85117 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1017 19:08:36.736521   85117 command_runner.go:130] > # allowed_devices = [
	I1017 19:08:36.736526   85117 command_runner.go:130] > # 	"/dev/fuse",
	I1017 19:08:36.736534   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736541   85117 command_runner.go:130] > # List of additional devices. specified as
	I1017 19:08:36.736569   85117 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1017 19:08:36.736580   85117 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1017 19:08:36.736589   85117 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1017 19:08:36.736598   85117 command_runner.go:130] > # additional_devices = [
	I1017 19:08:36.736602   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736612   85117 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1017 19:08:36.736621   85117 command_runner.go:130] > # cdi_spec_dirs = [
	I1017 19:08:36.736627   85117 command_runner.go:130] > # 	"/etc/cdi",
	I1017 19:08:36.736635   85117 command_runner.go:130] > # 	"/var/run/cdi",
	I1017 19:08:36.736640   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736652   85117 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1017 19:08:36.736664   85117 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1017 19:08:36.736673   85117 command_runner.go:130] > # Defaults to false.
	I1017 19:08:36.736684   85117 command_runner.go:130] > # device_ownership_from_security_context = false
	I1017 19:08:36.736696   85117 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1017 19:08:36.736707   85117 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1017 19:08:36.736715   85117 command_runner.go:130] > # hooks_dir = [
	I1017 19:08:36.736723   85117 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1017 19:08:36.736732   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736744   85117 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1017 19:08:36.736756   85117 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1017 19:08:36.736767   85117 command_runner.go:130] > # its default mounts from the following two files:
	I1017 19:08:36.736774   85117 command_runner.go:130] > #
	I1017 19:08:36.736783   85117 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1017 19:08:36.736795   85117 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1017 19:08:36.736809   85117 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1017 19:08:36.736817   85117 command_runner.go:130] > #
	I1017 19:08:36.736826   85117 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1017 19:08:36.736838   85117 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1017 19:08:36.736850   85117 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1017 19:08:36.736858   85117 command_runner.go:130] > #      only add mounts it finds in this file.
	I1017 19:08:36.736865   85117 command_runner.go:130] > #
	I1017 19:08:36.736871   85117 command_runner.go:130] > # default_mounts_file = ""
	I1017 19:08:36.736882   85117 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1017 19:08:36.736894   85117 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1017 19:08:36.736914   85117 command_runner.go:130] > pids_limit = 1024
	I1017 19:08:36.736938   85117 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1017 19:08:36.736957   85117 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1017 19:08:36.736976   85117 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1017 19:08:36.737004   85117 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1017 19:08:36.737015   85117 command_runner.go:130] > # log_size_max = -1
	I1017 19:08:36.737028   85117 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1017 19:08:36.737037   85117 command_runner.go:130] > # log_to_journald = false
	I1017 19:08:36.737051   85117 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1017 19:08:36.737062   85117 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1017 19:08:36.737073   85117 command_runner.go:130] > # Path to directory for container attach sockets.
	I1017 19:08:36.737084   85117 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1017 19:08:36.737094   85117 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1017 19:08:36.737102   85117 command_runner.go:130] > # bind_mount_prefix = ""
	I1017 19:08:36.737107   85117 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1017 19:08:36.737113   85117 command_runner.go:130] > # read_only = false
	I1017 19:08:36.737122   85117 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1017 19:08:36.737131   85117 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1017 19:08:36.737137   85117 command_runner.go:130] > # live configuration reload.
	I1017 19:08:36.737141   85117 command_runner.go:130] > # log_level = "info"
	I1017 19:08:36.737149   85117 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1017 19:08:36.737153   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.737159   85117 command_runner.go:130] > # log_filter = ""
	I1017 19:08:36.737165   85117 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1017 19:08:36.737175   85117 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1017 19:08:36.737181   85117 command_runner.go:130] > # separated by comma.
	I1017 19:08:36.737189   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737199   85117 command_runner.go:130] > # uid_mappings = ""
	I1017 19:08:36.737214   85117 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1017 19:08:36.737222   85117 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1017 19:08:36.737227   85117 command_runner.go:130] > # separated by comma.
	I1017 19:08:36.737234   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737238   85117 command_runner.go:130] > # gid_mappings = ""
	I1017 19:08:36.737244   85117 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1017 19:08:36.737252   85117 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1017 19:08:36.737258   85117 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1017 19:08:36.737268   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737274   85117 command_runner.go:130] > # minimum_mappable_uid = -1
	I1017 19:08:36.737280   85117 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1017 19:08:36.737285   85117 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1017 19:08:36.737293   85117 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1017 19:08:36.737301   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737306   85117 command_runner.go:130] > # minimum_mappable_gid = -1
	I1017 19:08:36.737312   85117 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1017 19:08:36.737318   85117 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1017 19:08:36.737326   85117 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1017 19:08:36.737330   85117 command_runner.go:130] > # ctr_stop_timeout = 30
	I1017 19:08:36.737335   85117 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1017 19:08:36.737343   85117 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1017 19:08:36.737349   85117 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1017 19:08:36.737354   85117 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1017 19:08:36.737360   85117 command_runner.go:130] > drop_infra_ctr = false
	I1017 19:08:36.737365   85117 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1017 19:08:36.737370   85117 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1017 19:08:36.737377   85117 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1017 19:08:36.737382   85117 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1017 19:08:36.737388   85117 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1017 19:08:36.737396   85117 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1017 19:08:36.737402   85117 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1017 19:08:36.737409   85117 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1017 19:08:36.737412   85117 command_runner.go:130] > # shared_cpuset = ""
	I1017 19:08:36.737421   85117 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1017 19:08:36.737428   85117 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1017 19:08:36.737434   85117 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1017 19:08:36.737441   85117 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1017 19:08:36.737447   85117 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1017 19:08:36.737452   85117 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1017 19:08:36.737460   85117 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1017 19:08:36.737464   85117 command_runner.go:130] > # enable_criu_support = false
	I1017 19:08:36.737471   85117 command_runner.go:130] > # Enable/disable the generation of the container and
	I1017 19:08:36.737477   85117 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1017 19:08:36.737484   85117 command_runner.go:130] > # enable_pod_events = false
	I1017 19:08:36.737490   85117 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1017 19:08:36.737507   85117 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1017 19:08:36.737510   85117 command_runner.go:130] > # default_runtime = "runc"
	I1017 19:08:36.737518   85117 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1017 19:08:36.737525   85117 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1017 19:08:36.737537   85117 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1017 19:08:36.737545   85117 command_runner.go:130] > # creation as a file is not desired either.
	I1017 19:08:36.737567   85117 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1017 19:08:36.737578   85117 command_runner.go:130] > # the hostname is being managed dynamically.
	I1017 19:08:36.737585   85117 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1017 19:08:36.737590   85117 command_runner.go:130] > # ]
	I1017 19:08:36.737597   85117 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1017 19:08:36.737605   85117 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1017 19:08:36.737613   85117 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1017 19:08:36.737618   85117 command_runner.go:130] > # Each entry in the table should follow the format:
	I1017 19:08:36.737623   85117 command_runner.go:130] > #
	I1017 19:08:36.737628   85117 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1017 19:08:36.737635   85117 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1017 19:08:36.737639   85117 command_runner.go:130] > # runtime_type = "oci"
	I1017 19:08:36.737698   85117 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1017 19:08:36.737709   85117 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1017 19:08:36.737719   85117 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1017 19:08:36.737725   85117 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1017 19:08:36.737735   85117 command_runner.go:130] > # monitor_env = []
	I1017 19:08:36.737744   85117 command_runner.go:130] > # privileged_without_host_devices = false
	I1017 19:08:36.737748   85117 command_runner.go:130] > # allowed_annotations = []
	I1017 19:08:36.737754   85117 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1017 19:08:36.737763   85117 command_runner.go:130] > # Where:
	I1017 19:08:36.737771   85117 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1017 19:08:36.737778   85117 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1017 19:08:36.737786   85117 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1017 19:08:36.737794   85117 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1017 19:08:36.737798   85117 command_runner.go:130] > #   in $PATH.
	I1017 19:08:36.737803   85117 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1017 19:08:36.737810   85117 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1017 19:08:36.737816   85117 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1017 19:08:36.737821   85117 command_runner.go:130] > #   state.
	I1017 19:08:36.737828   85117 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1017 19:08:36.737836   85117 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1017 19:08:36.737842   85117 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1017 19:08:36.737849   85117 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1017 19:08:36.737856   85117 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1017 19:08:36.737865   85117 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1017 19:08:36.737872   85117 command_runner.go:130] > #   The currently recognized values are:
	I1017 19:08:36.737878   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1017 19:08:36.737892   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1017 19:08:36.737900   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1017 19:08:36.737906   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1017 19:08:36.737916   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1017 19:08:36.737925   85117 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1017 19:08:36.737935   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1017 19:08:36.737943   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1017 19:08:36.737951   85117 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1017 19:08:36.737958   85117 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1017 19:08:36.737966   85117 command_runner.go:130] > #   deprecated option "conmon".
	I1017 19:08:36.737973   85117 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1017 19:08:36.737981   85117 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1017 19:08:36.737987   85117 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1017 19:08:36.737995   85117 command_runner.go:130] > #   should be moved to the container's cgroup
	I1017 19:08:36.738001   85117 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1017 19:08:36.738010   85117 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1017 19:08:36.738019   85117 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1017 19:08:36.738027   85117 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1017 19:08:36.738030   85117 command_runner.go:130] > #
	I1017 19:08:36.738038   85117 command_runner.go:130] > # Using the seccomp notifier feature:
	I1017 19:08:36.738041   85117 command_runner.go:130] > #
	I1017 19:08:36.738046   85117 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1017 19:08:36.738055   85117 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1017 19:08:36.738060   85117 command_runner.go:130] > #
	I1017 19:08:36.738067   85117 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1017 19:08:36.738075   85117 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1017 19:08:36.738080   85117 command_runner.go:130] > #
	I1017 19:08:36.738086   85117 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1017 19:08:36.738090   85117 command_runner.go:130] > # feature.
	I1017 19:08:36.738092   85117 command_runner.go:130] > #
	I1017 19:08:36.738100   85117 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1017 19:08:36.738108   85117 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1017 19:08:36.738114   85117 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1017 19:08:36.738123   85117 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1017 19:08:36.738132   85117 command_runner.go:130] > # seconds if "io.kubernetes.cri-o.seccompNotifierAction" is set to "stop".
	I1017 19:08:36.738137   85117 command_runner.go:130] > #
	I1017 19:08:36.738143   85117 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1017 19:08:36.738151   85117 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1017 19:08:36.738156   85117 command_runner.go:130] > #
	I1017 19:08:36.738162   85117 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1017 19:08:36.738169   85117 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1017 19:08:36.738172   85117 command_runner.go:130] > #
	I1017 19:08:36.738178   85117 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1017 19:08:36.738186   85117 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1017 19:08:36.738190   85117 command_runner.go:130] > # limitation.
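The runtime-handler table documented above is plain TOML. As a rough illustration of its shape, an entry can be decoded with the github.com/BurntSushi/toml package; the struct below is a sketch covering only a few of the keys listed above, not CRI-O's actual config types:

package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// runtimeHandler mirrors a subset of the per-runtime keys documented above;
// the field selection is illustrative.
type runtimeHandler struct {
	RuntimePath        string   `toml:"runtime_path"`
	RuntimeType        string   `toml:"runtime_type"`
	RuntimeRoot        string   `toml:"runtime_root"`
	MonitorPath        string   `toml:"monitor_path"`
	AllowedAnnotations []string `toml:"allowed_annotations"`
}

type config struct {
	Crio struct {
		Runtime struct {
			Runtimes map[string]runtimeHandler `toml:"runtimes"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

const sample = `
[crio.runtime.runtimes.runc]
runtime_path = "/usr/bin/runc"
runtime_type = "oci"
runtime_root = "/run/runc"
monitor_path = "/usr/bin/conmon"
allowed_annotations = ["io.kubernetes.cri-o.Devices"]
`

func main() {
	var cfg config
	if _, err := toml.Decode(sample, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg.Crio.Runtime.Runtimes["runc"])
}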
	I1017 19:08:36.738198   85117 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1017 19:08:36.738202   85117 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1017 19:08:36.738212   85117 command_runner.go:130] > runtime_type = "oci"
	I1017 19:08:36.738218   85117 command_runner.go:130] > runtime_root = "/run/runc"
	I1017 19:08:36.738222   85117 command_runner.go:130] > runtime_config_path = ""
	I1017 19:08:36.738228   85117 command_runner.go:130] > monitor_path = "/usr/bin/conmon"
	I1017 19:08:36.738233   85117 command_runner.go:130] > monitor_cgroup = "pod"
	I1017 19:08:36.738239   85117 command_runner.go:130] > monitor_exec_cgroup = ""
	I1017 19:08:36.738242   85117 command_runner.go:130] > monitor_env = [
	I1017 19:08:36.738250   85117 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1017 19:08:36.738253   85117 command_runner.go:130] > ]
	I1017 19:08:36.738258   85117 command_runner.go:130] > privileged_without_host_devices = false
	I1017 19:08:36.738270   85117 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1017 19:08:36.738277   85117 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1017 19:08:36.738283   85117 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1017 19:08:36.738302   85117 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1017 19:08:36.738315   85117 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1017 19:08:36.738320   85117 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1017 19:08:36.738331   85117 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1017 19:08:36.738339   85117 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1017 19:08:36.738347   85117 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1017 19:08:36.738354   85117 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1017 19:08:36.738359   85117 command_runner.go:130] > # Example:
	I1017 19:08:36.738364   85117 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1017 19:08:36.738368   85117 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1017 19:08:36.738373   85117 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1017 19:08:36.738378   85117 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1017 19:08:36.738381   85117 command_runner.go:130] > # cpuset = "0-1"
	I1017 19:08:36.738384   85117 command_runner.go:130] > # cpushares = 0
	I1017 19:08:36.738388   85117 command_runner.go:130] > # Where:
	I1017 19:08:36.738392   85117 command_runner.go:130] > # The workload name is workload-type.
	I1017 19:08:36.738399   85117 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1017 19:08:36.738406   85117 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1017 19:08:36.738411   85117 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1017 19:08:36.738419   85117 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1017 19:08:36.738427   85117 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
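To make the workloads example above concrete, here is a hedged sketch of a pod opting into the hypothetical "workload-type" workload via annotations, written with client-go types. The annotation keys follow the $activation_annotation and $annotation_prefix.$resource/$ctrName forms described above; the pod and container names are invented for illustration:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "demo",
			Annotations: map[string]string{
				// Activation annotation: key only, value is ignored.
				"io.crio/workload": "",
				// Per-container override: $annotation_prefix.$resource/$ctrName.
				"io.crio.workload-type.cpushares/app": "512",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
		},
	}
	fmt.Println(pod.Annotations)
}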
	I1017 19:08:36.738431   85117 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1017 19:08:36.738437   85117 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1017 19:08:36.738443   85117 command_runner.go:130] > # Default value is set to true
	I1017 19:08:36.738447   85117 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1017 19:08:36.738454   85117 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1017 19:08:36.738459   85117 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1017 19:08:36.738465   85117 command_runner.go:130] > # Default value is set to 'false'
	I1017 19:08:36.738470   85117 command_runner.go:130] > # disable_hostport_mapping = false
	I1017 19:08:36.738478   85117 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1017 19:08:36.738484   85117 command_runner.go:130] > #
	I1017 19:08:36.738489   85117 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1017 19:08:36.738500   85117 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1017 19:08:36.738508   85117 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1017 19:08:36.738517   85117 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1017 19:08:36.738522   85117 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1017 19:08:36.738529   85117 command_runner.go:130] > [crio.image]
	I1017 19:08:36.738535   85117 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1017 19:08:36.738541   85117 command_runner.go:130] > # default_transport = "docker://"
	I1017 19:08:36.738547   85117 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1017 19:08:36.738573   85117 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1017 19:08:36.738580   85117 command_runner.go:130] > # global_auth_file = ""
	I1017 19:08:36.738589   85117 command_runner.go:130] > # The image used to instantiate infra containers.
	I1017 19:08:36.738594   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.738601   85117 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10.1"
	I1017 19:08:36.738608   85117 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1017 19:08:36.738616   85117 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1017 19:08:36.738622   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.738626   85117 command_runner.go:130] > # pause_image_auth_file = ""
	I1017 19:08:36.738634   85117 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1017 19:08:36.738642   85117 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1017 19:08:36.738648   85117 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1017 19:08:36.738656   85117 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1017 19:08:36.738660   85117 command_runner.go:130] > # pause_command = "/pause"
	I1017 19:08:36.738668   85117 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1017 19:08:36.738674   85117 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1017 19:08:36.738690   85117 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1017 19:08:36.738700   85117 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1017 19:08:36.738709   85117 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1017 19:08:36.738718   85117 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1017 19:08:36.738722   85117 command_runner.go:130] > # pinned_images = [
	I1017 19:08:36.738727   85117 command_runner.go:130] > # ]
	I1017 19:08:36.738734   85117 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1017 19:08:36.738742   85117 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1017 19:08:36.738748   85117 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1017 19:08:36.738756   85117 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1017 19:08:36.738762   85117 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1017 19:08:36.738768   85117 command_runner.go:130] > # signature_policy = ""
	I1017 19:08:36.738773   85117 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1017 19:08:36.738781   85117 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1017 19:08:36.738787   85117 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1017 19:08:36.738792   85117 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1017 19:08:36.738798   85117 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1017 19:08:36.738802   85117 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1017 19:08:36.738808   85117 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1017 19:08:36.738813   85117 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1017 19:08:36.738817   85117 command_runner.go:130] > # changing them here.
	I1017 19:08:36.738820   85117 command_runner.go:130] > # insecure_registries = [
	I1017 19:08:36.738823   85117 command_runner.go:130] > # ]
	I1017 19:08:36.738828   85117 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1017 19:08:36.738833   85117 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1017 19:08:36.738836   85117 command_runner.go:130] > # image_volumes = "mkdir"
	I1017 19:08:36.738841   85117 command_runner.go:130] > # Temporary directory to use for storing big files
	I1017 19:08:36.738845   85117 command_runner.go:130] > # big_files_temporary_dir = ""
	I1017 19:08:36.738850   85117 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1017 19:08:36.738853   85117 command_runner.go:130] > # CNI plugins.
	I1017 19:08:36.738856   85117 command_runner.go:130] > [crio.network]
	I1017 19:08:36.738861   85117 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1017 19:08:36.738869   85117 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1017 19:08:36.738873   85117 command_runner.go:130] > # cni_default_network = ""
	I1017 19:08:36.738880   85117 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1017 19:08:36.738884   85117 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1017 19:08:36.738892   85117 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1017 19:08:36.738895   85117 command_runner.go:130] > # plugin_dirs = [
	I1017 19:08:36.738901   85117 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1017 19:08:36.738904   85117 command_runner.go:130] > # ]
	I1017 19:08:36.738909   85117 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1017 19:08:36.738915   85117 command_runner.go:130] > [crio.metrics]
	I1017 19:08:36.738919   85117 command_runner.go:130] > # Globally enable or disable metrics support.
	I1017 19:08:36.738925   85117 command_runner.go:130] > enable_metrics = true
	I1017 19:08:36.738929   85117 command_runner.go:130] > # Specify enabled metrics collectors.
	I1017 19:08:36.738939   85117 command_runner.go:130] > # By default, all metrics are enabled.
	I1017 19:08:36.738948   85117 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1017 19:08:36.738957   85117 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1017 19:08:36.738966   85117 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1017 19:08:36.738969   85117 command_runner.go:130] > # metrics_collectors = [
	I1017 19:08:36.738975   85117 command_runner.go:130] > # 	"operations",
	I1017 19:08:36.738980   85117 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1017 19:08:36.738988   85117 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1017 19:08:36.738992   85117 command_runner.go:130] > # 	"operations_errors",
	I1017 19:08:36.738998   85117 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1017 19:08:36.739002   85117 command_runner.go:130] > # 	"image_pulls_by_name",
	I1017 19:08:36.739008   85117 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1017 19:08:36.739012   85117 command_runner.go:130] > # 	"image_pulls_failures",
	I1017 19:08:36.739019   85117 command_runner.go:130] > # 	"image_pulls_successes",
	I1017 19:08:36.739022   85117 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1017 19:08:36.739029   85117 command_runner.go:130] > # 	"image_layer_reuse",
	I1017 19:08:36.739033   85117 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1017 19:08:36.739037   85117 command_runner.go:130] > # 	"containers_oom_total",
	I1017 19:08:36.739041   85117 command_runner.go:130] > # 	"containers_oom",
	I1017 19:08:36.739047   85117 command_runner.go:130] > # 	"processes_defunct",
	I1017 19:08:36.739050   85117 command_runner.go:130] > # 	"operations_total",
	I1017 19:08:36.739057   85117 command_runner.go:130] > # 	"operations_latency_seconds",
	I1017 19:08:36.739061   85117 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1017 19:08:36.739068   85117 command_runner.go:130] > # 	"operations_errors_total",
	I1017 19:08:36.739071   85117 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1017 19:08:36.739078   85117 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1017 19:08:36.739082   85117 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1017 19:08:36.739088   85117 command_runner.go:130] > # 	"image_pulls_success_total",
	I1017 19:08:36.739092   85117 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1017 19:08:36.739099   85117 command_runner.go:130] > # 	"containers_oom_count_total",
	I1017 19:08:36.739103   85117 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1017 19:08:36.739110   85117 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1017 19:08:36.739112   85117 command_runner.go:130] > # ]
	I1017 19:08:36.739119   85117 command_runner.go:130] > # The port on which the metrics server will listen.
	I1017 19:08:36.739125   85117 command_runner.go:130] > # metrics_port = 9090
	I1017 19:08:36.739132   85117 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1017 19:08:36.739136   85117 command_runner.go:130] > # metrics_socket = ""
	I1017 19:08:36.739143   85117 command_runner.go:130] > # The certificate for the secure metrics server.
	I1017 19:08:36.739148   85117 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1017 19:08:36.739156   85117 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1017 19:08:36.739161   85117 command_runner.go:130] > # certificate on any modification event.
	I1017 19:08:36.739165   85117 command_runner.go:130] > # metrics_cert = ""
	I1017 19:08:36.739170   85117 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1017 19:08:36.739176   85117 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1017 19:08:36.739180   85117 command_runner.go:130] > # metrics_key = ""
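Since enable_metrics is on and metrics_port defaults to 9090 (both shown above), the listed collectors are exposed in Prometheus text format and can be fetched with a plain HTTP GET. A minimal sketch, assuming the default port and an unsecured endpoint:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Fetch CRI-O's Prometheus metrics from the default metrics_port.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", body)
}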
	I1017 19:08:36.739188   85117 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1017 19:08:36.739191   85117 command_runner.go:130] > [crio.tracing]
	I1017 19:08:36.739200   85117 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1017 19:08:36.739203   85117 command_runner.go:130] > # enable_tracing = false
	I1017 19:08:36.739214   85117 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1017 19:08:36.739221   85117 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1017 19:08:36.739227   85117 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1017 19:08:36.739240   85117 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1017 19:08:36.739246   85117 command_runner.go:130] > # CRI-O NRI configuration.
	I1017 19:08:36.739250   85117 command_runner.go:130] > [crio.nri]
	I1017 19:08:36.739254   85117 command_runner.go:130] > # Globally enable or disable NRI.
	I1017 19:08:36.739260   85117 command_runner.go:130] > # enable_nri = false
	I1017 19:08:36.739264   85117 command_runner.go:130] > # NRI socket to listen on.
	I1017 19:08:36.739271   85117 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1017 19:08:36.739275   85117 command_runner.go:130] > # NRI plugin directory to use.
	I1017 19:08:36.739280   85117 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1017 19:08:36.739287   85117 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1017 19:08:36.739291   85117 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1017 19:08:36.739299   85117 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1017 19:08:36.739303   85117 command_runner.go:130] > # nri_disable_connections = false
	I1017 19:08:36.739310   85117 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1017 19:08:36.739315   85117 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1017 19:08:36.739325   85117 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1017 19:08:36.739332   85117 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1017 19:08:36.739337   85117 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1017 19:08:36.739343   85117 command_runner.go:130] > [crio.stats]
	I1017 19:08:36.739348   85117 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1017 19:08:36.739353   85117 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1017 19:08:36.739360   85117 command_runner.go:130] > # stats_collection_period = 0
	I1017 19:08:36.739439   85117 cni.go:84] Creating CNI manager for ""
	I1017 19:08:36.739451   85117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 19:08:36.739480   85117 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:08:36.739504   85117 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-016863 NodeName:functional-016863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:08:36.739644   85117 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-016863"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.205"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
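minikube renders the kubeadm config above from the options struct logged at kubeadm.go:190. A minimal text/template sketch of that rendering step, using a simplified stand-in struct and template rather than minikube's actual ones:

package main

import (
	"os"
	"text/template"
)

// params is a simplified stand-in for minikube's kubeadm options struct.
type params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the log above.
	if err := t.Execute(os.Stdout, params{"192.168.39.205", 8441, "functional-016863"}); err != nil {
		panic(err)
	}
}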
	
	I1017 19:08:36.739707   85117 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:08:36.752377   85117 command_runner.go:130] > kubeadm
	I1017 19:08:36.752404   85117 command_runner.go:130] > kubectl
	I1017 19:08:36.752408   85117 command_runner.go:130] > kubelet
	I1017 19:08:36.752864   85117 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:08:36.752933   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:08:36.764722   85117 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1017 19:08:36.786673   85117 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:08:36.808021   85117 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1017 19:08:36.828821   85117 ssh_runner.go:195] Run: grep 192.168.39.205	control-plane.minikube.internal$ /etc/hosts
	I1017 19:08:36.833177   85117 command_runner.go:130] > 192.168.39.205	control-plane.minikube.internal
	I1017 19:08:36.833246   85117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:08:37.010934   85117 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:08:37.030439   85117 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863 for IP: 192.168.39.205
	I1017 19:08:37.030467   85117 certs.go:195] generating shared ca certs ...
	I1017 19:08:37.030485   85117 certs.go:227] acquiring lock for ca certs: {Name:mka410ab7d3b92eaaa0d0545223807c0ba196baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:08:37.030690   85117 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key
	I1017 19:08:37.030747   85117 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key
	I1017 19:08:37.030762   85117 certs.go:257] generating profile certs ...
	I1017 19:08:37.030878   85117 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/client.key
	I1017 19:08:37.030972   85117 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.key.c24585d5
	I1017 19:08:37.031049   85117 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.key
	I1017 19:08:37.031067   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:08:37.031086   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:08:37.031102   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:08:37.031121   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:08:37.031138   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:08:37.031155   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:08:37.031179   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:08:37.031195   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:08:37.031270   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem (1338 bytes)
	W1017 19:08:37.031314   85117 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439_empty.pem, impossibly tiny 0 bytes
	I1017 19:08:37.031328   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 19:08:37.031364   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:08:37.031395   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:08:37.031426   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem (1679 bytes)
	I1017 19:08:37.031478   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem (1708 bytes)
	I1017 19:08:37.031518   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.031537   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.031564   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem -> /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.032341   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:08:37.064212   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:08:37.094935   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:08:37.126973   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:08:37.157540   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 19:08:37.187168   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 19:08:37.217543   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:08:37.247400   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 19:08:37.278758   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem --> /usr/share/ca-certificates/794392.pem (1708 bytes)
	I1017 19:08:37.308088   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:08:37.338377   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem --> /usr/share/ca-certificates/79439.pem (1338 bytes)
	I1017 19:08:37.369350   85117 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:08:37.390154   85117 ssh_runner.go:195] Run: openssl version
	I1017 19:08:37.397183   85117 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1017 19:08:37.397310   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/79439.pem && ln -fs /usr/share/ca-certificates/79439.pem /etc/ssl/certs/79439.pem"
	I1017 19:08:37.411628   85117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.417085   85117 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 17 19:05 /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.417178   85117 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:05 /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.417250   85117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.424962   85117 command_runner.go:130] > 51391683
	I1017 19:08:37.425158   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/79439.pem /etc/ssl/certs/51391683.0"
	I1017 19:08:37.437578   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/794392.pem && ln -fs /usr/share/ca-certificates/794392.pem /etc/ssl/certs/794392.pem"
	I1017 19:08:37.452363   85117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.458096   85117 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 17 19:05 /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.458164   85117 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:05 /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.458223   85117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.466074   85117 command_runner.go:130] > 3ec20f2e
	I1017 19:08:37.466249   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/794392.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:08:37.478828   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:08:37.493772   85117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.499621   85117 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.499822   85117 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.499886   85117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.507945   85117 command_runner.go:130] > b5213941
	I1017 19:08:37.508223   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
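The `openssl x509 -hash` / `ln -fs` sequence above installs each CA certificate under /etc/ssl/certs/<subject-hash>.0, the layout OpenSSL's lookup-by-hash uses. A minimal Go sketch of the same two steps (the path is taken from the log; the force-replace behavior of ln -fs is emulated with a Remove):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemFile := "/usr/share/ca-certificates/79439.pem"
	// `openssl x509 -hash -noout` prints the subject-name hash (e.g. 51391683).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemFile).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any existing link, like ln -fs
	if err := os.Symlink(pemFile, link); err != nil {
		panic(err)
	}
	fmt.Println("installed", link)
}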
	I1017 19:08:37.520563   85117 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:08:37.526401   85117 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:08:37.526439   85117 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1017 19:08:37.526449   85117 command_runner.go:130] > Device: 253,1	Inode: 1054372     Links: 1
	I1017 19:08:37.526460   85117 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1017 19:08:37.526477   85117 command_runner.go:130] > Access: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526489   85117 command_runner.go:130] > Modify: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526500   85117 command_runner.go:130] > Change: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526510   85117 command_runner.go:130] >  Birth: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526610   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:08:37.533974   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.534188   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:08:37.541725   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.541833   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:08:37.549277   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.549348   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:08:37.556865   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.556943   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:08:37.564379   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.564452   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 19:08:37.571575   85117 command_runner.go:130] > Certificate will not expire
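Each -checkend 86400 invocation above asks OpenSSL whether the certificate will expire within the next 86,400 seconds (24 hours); exit status 0 produces the "Certificate will not expire" lines, so all control-plane certs here are healthy. The same check by hand, using two of the cert paths from the log:

    # Exit 0 if the cert is still valid 24h from now; exit 1 (and flag it)
    # if it would expire within that window.
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt; do
      openssl x509 -noout -in "$crt" -checkend 86400 || echo "$crt expires soon"
    done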
	I1017 19:08:37.571807   85117 kubeadm.go:400] StartCluster: {Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:08:37.571943   85117 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:08:37.572009   85117 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:08:37.614275   85117 command_runner.go:130] > 5052ee3b4b13e54f7516a211d580d31d7e4856f34ebe5b5bc8a1778244018fb0
	I1017 19:08:37.614306   85117 command_runner.go:130] > 56c02355399031d32d66f1780ee1bc7396eeb5eb1b454f946254fe345879e8e0
	I1017 19:08:37.614315   85117 command_runner.go:130] > 56048147246b1d30ce16d066a4bbb216f1f7c9b1459e21fa60ee108fdd3aa42a
	I1017 19:08:37.614325   85117 command_runner.go:130] > 1b1f7dfe245a6d20e55f02381f27ec11e1eec3bf32b8112aaab88ea95c008e93
	I1017 19:08:37.614332   85117 command_runner.go:130] > b4db2cb7b47399fb64d0f31922d185a1ae009961ae05b56d9514db6f489a25eb
	I1017 19:08:37.614340   85117 command_runner.go:130] > d6eeaf9720fb0a5853cabba8afe0f0c64370fd422e21db9af2a9b6ce4b9aecc1
	I1017 19:08:37.614347   85117 command_runner.go:130] > 26c7a235bb67e91ab6abf0c0282c65a526c3d2fc628ec6008956402a02d5b1e8
	I1017 19:08:37.614369   85117 command_runner.go:130] > 171e623260fdb36d39493caf1c0b8c10efb097287233e2565304b12ece716a85
	I1017 19:08:37.614383   85117 command_runner.go:130] > 0fe4cc88e7a7a757f4debf7f3ff8f76bef81d0a36e83bd994df86baa42f47a71
	I1017 19:08:37.614397   85117 command_runner.go:130] > 86dba9687f70280ffaa952d354e90ec1a4ff74d73869d9360c56690901ad9461
	I1017 19:08:37.614406   85117 command_runner.go:130] > 4d4ae675fa012cf6e18dd10516f8c83d32b364f3e27d8068722234a797bc7b1a
	I1017 19:08:37.614460   85117 cri.go:89] found id: "5052ee3b4b13e54f7516a211d580d31d7e4856f34ebe5b5bc8a1778244018fb0"
	I1017 19:08:37.614475   85117 cri.go:89] found id: "56c02355399031d32d66f1780ee1bc7396eeb5eb1b454f946254fe345879e8e0"
	I1017 19:08:37.614481   85117 cri.go:89] found id: "56048147246b1d30ce16d066a4bbb216f1f7c9b1459e21fa60ee108fdd3aa42a"
	I1017 19:08:37.614486   85117 cri.go:89] found id: "1b1f7dfe245a6d20e55f02381f27ec11e1eec3bf32b8112aaab88ea95c008e93"
	I1017 19:08:37.614490   85117 cri.go:89] found id: "b4db2cb7b47399fb64d0f31922d185a1ae009961ae05b56d9514db6f489a25eb"
	I1017 19:08:37.614498   85117 cri.go:89] found id: "d6eeaf9720fb0a5853cabba8afe0f0c64370fd422e21db9af2a9b6ce4b9aecc1"
	I1017 19:08:37.614513   85117 cri.go:89] found id: "26c7a235bb67e91ab6abf0c0282c65a526c3d2fc628ec6008956402a02d5b1e8"
	I1017 19:08:37.614519   85117 cri.go:89] found id: "171e623260fdb36d39493caf1c0b8c10efb097287233e2565304b12ece716a85"
	I1017 19:08:37.614521   85117 cri.go:89] found id: "0fe4cc88e7a7a757f4debf7f3ff8f76bef81d0a36e83bd994df86baa42f47a71"
	I1017 19:08:37.614530   85117 cri.go:89] found id: "86dba9687f70280ffaa952d354e90ec1a4ff74d73869d9360c56690901ad9461"
	I1017 19:08:37.614535   85117 cri.go:89] found id: "4d4ae675fa012cf6e18dd10516f8c83d32b364f3e27d8068722234a797bc7b1a"
	I1017 19:08:37.614538   85117 cri.go:89] found id: ""
	I1017 19:08:37.614600   85117 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
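The container IDs in the dump above come from crictl's label filter: CRI-O labels every container with its pod's namespace, so the kube-system control-plane containers can be listed directly. A minimal way to rerun the same query on the node, assuming the profile name from this report:

    # List all kube-system container IDs known to CRI-O, running or exited,
    # matching the command minikube runs in the log.
    minikube ssh -p functional-016863 -- \
      sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system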
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016863 -n functional-016863
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016863 -n functional-016863: exit status 2 (243.505629ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-016863" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (1234.90s)
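status --format takes a Go template over minikube's status struct, which is why {{.APIServer}} prints just "Stopped" above while {{.Host}} in the next post-mortem prints "Running". A sketch querying several fields at once (field names as used throughout this report):

    # Print host, kubelet, and apiserver state on one line; a non-zero exit
    # (status 2 above) signals a degraded component.
    out/minikube-linux-amd64 status -p functional-016863 \
      --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'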

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (394.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-016863 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-016863 get po -A: exit status 1 (53.459249ms)

                                                
                                                
** stderr ** 
	E1017 19:27:31.521640   89920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.205:8441/api?timeout=32s\": dial tcp 192.168.39.205:8441: connect: connection refused"
	E1017 19:27:31.522196   89920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.205:8441/api?timeout=32s\": dial tcp 192.168.39.205:8441: connect: connection refused"
	E1017 19:27:31.523731   89920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.205:8441/api?timeout=32s\": dial tcp 192.168.39.205:8441: connect: connection refused"
	E1017 19:27:31.524111   89920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.205:8441/api?timeout=32s\": dial tcp 192.168.39.205:8441: connect: connection refused"
	E1017 19:27:31.525632   89920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.205:8441/api?timeout=32s\": dial tcp 192.168.39.205:8441: connect: connection refused"
	The connection to the server 192.168.39.205:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-016863 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1017 19:27:31.521640   89920 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.205:8441/api?timeout=32s\\\": dial tcp 192.168.39.205:8441: connect: connection refused\"\nE1017 19:27:31.522196   89920 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.205:8441/api?timeout=32s\\\": dial tcp 192.168.39.205:8441: connect: connection refused\"\nE1017 19:27:31.523731   89920 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.205:8441/api?timeout=32s\\\": dial tcp 192.168.39.205:8441: connect: connection refused\"\nE1017 19:27:31.524111   89920 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.205:8441/api?timeout=32s\\\": dial tcp 192.168.39.205:8441: connect: connection refused\"\nE1017 19:27:31.525632   89920 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.205:8441/api?timeout=32s\\\": dial tcp 192.168.39.205:8441: connect: connection refused\"\nThe connection to the server 192.168.39.205:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-016863 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-016863 get po -A"
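"connect: connection refused" on 192.168.39.205:8441 means the VM answered but nothing was listening on the apiserver port, consistent with the Host=Running, APIServer=Stopped split in the post-mortem below. A minimal triage sketch under that assumption:

    # Is anything bound to the apiserver port inside the guest?
    minikube ssh -p functional-016863 -- sudo ss -tlnp | grep 8441 \
      || echo "port 8441 not listening"
    # Is the kube-apiserver container up under CRI-O?
    minikube ssh -p functional-016863 -- sudo crictl ps -a --name kube-apiserver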
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-016863 -n functional-016863
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-016863 -n functional-016863: exit status 2 (232.907308ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-016863 logs -n 25
E1017 19:28:47.749879   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:33:47.750836   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-016863 logs -n 25: (6m33.443540812s)
helpers_test.go:260: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ addons-768633 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                           │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │ 17 Oct 25 19:00 UTC │
	│ ip      │ addons-768633 ip                                                                                                                                  │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:01 UTC │ 17 Oct 25 19:01 UTC │
	│ addons  │ addons-768633 addons disable ingress-dns --alsologtostderr -v=1                                                                                   │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:01 UTC │ 17 Oct 25 19:01 UTC │
	│ addons  │ addons-768633 addons disable ingress --alsologtostderr -v=1                                                                                       │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:01 UTC │ 17 Oct 25 19:01 UTC │
	│ stop    │ -p addons-768633                                                                                                                                  │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:01 UTC │ 17 Oct 25 19:03 UTC │
	│ addons  │ enable dashboard -p addons-768633                                                                                                                 │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:03 UTC │ 17 Oct 25 19:03 UTC │
	│ addons  │ disable dashboard -p addons-768633                                                                                                                │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:03 UTC │ 17 Oct 25 19:03 UTC │
	│ addons  │ disable gvisor -p addons-768633                                                                                                                   │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:03 UTC │ 17 Oct 25 19:03 UTC │
	│ delete  │ -p addons-768633                                                                                                                                  │ addons-768633     │ jenkins │ v1.37.0 │ 17 Oct 25 19:03 UTC │ 17 Oct 25 19:03 UTC │
	│ start   │ -p nospam-712449 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-712449 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:03 UTC │ 17 Oct 25 19:04 UTC │
	│ start   │ nospam-712449 --log_dir /tmp/nospam-712449 start --dry-run                                                                                        │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │                     │
	│ start   │ nospam-712449 --log_dir /tmp/nospam-712449 start --dry-run                                                                                        │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │                     │
	│ start   │ nospam-712449 --log_dir /tmp/nospam-712449 start --dry-run                                                                                        │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │                     │
	│ pause   │ nospam-712449 --log_dir /tmp/nospam-712449 pause                                                                                                  │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ pause   │ nospam-712449 --log_dir /tmp/nospam-712449 pause                                                                                                  │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ pause   │ nospam-712449 --log_dir /tmp/nospam-712449 pause                                                                                                  │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ unpause │ nospam-712449 --log_dir /tmp/nospam-712449 unpause                                                                                                │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ unpause │ nospam-712449 --log_dir /tmp/nospam-712449 unpause                                                                                                │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ unpause │ nospam-712449 --log_dir /tmp/nospam-712449 unpause                                                                                                │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ stop    │ nospam-712449 --log_dir /tmp/nospam-712449 stop                                                                                                   │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:05 UTC │
	│ stop    │ nospam-712449 --log_dir /tmp/nospam-712449 stop                                                                                                   │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ stop    │ nospam-712449 --log_dir /tmp/nospam-712449 stop                                                                                                   │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ delete  │ -p nospam-712449                                                                                                                                  │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ start   │ -p functional-016863 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false           │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:06 UTC │
	│ start   │ -p functional-016863 --alsologtostderr -v=8                                                                                                       │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
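The last Audit row has no END TIME: the second start of functional-016863 never completed, and it is that run whose log follows. Reproducing it outside CI is the same invocation, with the binary path used throughout this report:

    # Re-run the soft start that hangs in this report; --alsologtostderr -v=8
    # streams the libmachine and kubeadm steps to stderr as below.
    out/minikube-linux-amd64 start -p functional-016863 --alsologtostderr -v=8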
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:06:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:06:56.570682   85117 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:06:56.570809   85117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:56.570820   85117 out.go:374] Setting ErrFile to fd 2...
	I1017 19:06:56.570826   85117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:56.571105   85117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-75534/.minikube/bin
	I1017 19:06:56.571578   85117 out.go:368] Setting JSON to false
	I1017 19:06:56.572426   85117 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6568,"bootTime":1760721449,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:06:56.572524   85117 start.go:141] virtualization: kvm guest
	I1017 19:06:56.574519   85117 out.go:179] * [functional-016863] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:06:56.575690   85117 notify.go:220] Checking for updates...
	I1017 19:06:56.575704   85117 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:06:56.577138   85117 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:06:56.578363   85117 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig
	I1017 19:06:56.579669   85117 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube
	I1017 19:06:56.581027   85117 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:06:56.582307   85117 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:06:56.583921   85117 config.go:182] Loaded profile config "functional-016863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:06:56.584037   85117 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:06:56.584492   85117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:06:56.584589   85117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:06:56.600478   85117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35877
	I1017 19:06:56.600991   85117 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:06:56.601750   85117 main.go:141] libmachine: Using API Version  1
	I1017 19:06:56.601786   85117 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:06:56.602161   85117 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:06:56.602390   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:06:56.635697   85117 out.go:179] * Using the kvm2 driver based on existing profile
	I1017 19:06:56.637016   85117 start.go:305] selected driver: kvm2
	I1017 19:06:56.637040   85117 start.go:925] validating driver "kvm2" against &{Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:06:56.637141   85117 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:06:56.637622   85117 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:06:56.637712   85117 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:06:56.651574   85117 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:06:56.651619   85117 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:06:56.665844   85117 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:06:56.666547   85117 cni.go:84] Creating CNI manager for ""
	I1017 19:06:56.666631   85117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 19:06:56.666699   85117 start.go:349] cluster config:
	{Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:06:56.666812   85117 iso.go:125] acquiring lock: {Name:mk89d24a0bd9a0a8cf0564a4affa55e11eaff101 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:06:56.668638   85117 out.go:179] * Starting "functional-016863" primary control-plane node in "functional-016863" cluster
	I1017 19:06:56.669893   85117 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:06:56.669940   85117 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:06:56.669951   85117 cache.go:58] Caching tarball of preloaded images
	I1017 19:06:56.670102   85117 preload.go:233] Found /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:06:56.670116   85117 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
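The preload steps above short-circuit image pulls: when the per-version tarball is already cached, minikube skips the download and later extracts it into the VM's container storage. A quick look at what was reused, path taken from the log:

    # The preloaded image tarball found and reused in this run.
    ls -lh /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/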
	I1017 19:06:56.670235   85117 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/config.json ...
	I1017 19:06:56.670445   85117 start.go:360] acquireMachinesLock for functional-016863: {Name:mke0c3abe726945d0c60793aa0bf26eb33df7fed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1017 19:06:56.670494   85117 start.go:364] duration metric: took 29.325µs to acquireMachinesLock for "functional-016863"
	I1017 19:06:56.670514   85117 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:06:56.670524   85117 fix.go:54] fixHost starting: 
	I1017 19:06:56.670828   85117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:06:56.670877   85117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:06:56.683516   85117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I1017 19:06:56.683978   85117 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:06:56.684470   85117 main.go:141] libmachine: Using API Version  1
	I1017 19:06:56.684493   85117 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:06:56.684844   85117 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:06:56.685047   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:06:56.685223   85117 main.go:141] libmachine: (functional-016863) Calling .GetState
	I1017 19:06:56.686913   85117 fix.go:112] recreateIfNeeded on functional-016863: state=Running err=<nil>
	W1017 19:06:56.686945   85117 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:06:56.688754   85117 out.go:252] * Updating the running kvm2 "functional-016863" VM ...
	I1017 19:06:56.688779   85117 machine.go:93] provisionDockerMachine start ...
	I1017 19:06:56.688795   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:06:56.689021   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:56.691985   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.692501   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:56.692527   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.692713   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:56.692904   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.693142   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.693299   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:56.693474   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:56.693724   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:56.693736   85117 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:06:56.799511   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016863
	
	I1017 19:06:56.799542   85117 main.go:141] libmachine: (functional-016863) Calling .GetMachineName
	I1017 19:06:56.799819   85117 buildroot.go:166] provisioning hostname "functional-016863"
	I1017 19:06:56.799862   85117 main.go:141] libmachine: (functional-016863) Calling .GetMachineName
	I1017 19:06:56.800154   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:56.803810   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.804342   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:56.804375   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.804593   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:56.804779   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.804950   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.805112   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:56.805279   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:56.805490   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:56.805503   85117 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-016863 && echo "functional-016863" | sudo tee /etc/hostname
	I1017 19:06:56.929174   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016863
	
	I1017 19:06:56.929205   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:56.932429   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.932929   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:56.932954   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.933186   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:56.933423   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.933612   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.933826   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:56.934076   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:56.934309   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:56.934326   85117 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-016863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-016863/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-016863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:06:57.042297   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:06:57.042330   85117 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21753-75534/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-75534/.minikube}
	I1017 19:06:57.042373   85117 buildroot.go:174] setting up certificates
	I1017 19:06:57.042382   85117 provision.go:84] configureAuth start
	I1017 19:06:57.042395   85117 main.go:141] libmachine: (functional-016863) Calling .GetMachineName
	I1017 19:06:57.042715   85117 main.go:141] libmachine: (functional-016863) Calling .GetIP
	I1017 19:06:57.045902   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.046469   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.046508   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.046778   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:57.049360   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.049857   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.049902   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.050076   85117 provision.go:143] copyHostCerts
	I1017 19:06:57.050123   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem
	I1017 19:06:57.050183   85117 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem, removing ...
	I1017 19:06:57.050205   85117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem
	I1017 19:06:57.050294   85117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem (1082 bytes)
	I1017 19:06:57.050425   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem
	I1017 19:06:57.050463   85117 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem, removing ...
	I1017 19:06:57.050473   85117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem
	I1017 19:06:57.050602   85117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem (1123 bytes)
	I1017 19:06:57.050772   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem
	I1017 19:06:57.050815   85117 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem, removing ...
	I1017 19:06:57.050825   85117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem
	I1017 19:06:57.050881   85117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem (1679 bytes)
	I1017 19:06:57.051013   85117 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem org=jenkins.functional-016863 san=[127.0.0.1 192.168.39.205 functional-016863 localhost minikube]
	I1017 19:06:57.269277   85117 provision.go:177] copyRemoteCerts
	I1017 19:06:57.269362   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:06:57.269401   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:57.272458   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.272834   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.272866   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.273060   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:57.273266   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:57.273480   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:57.273640   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:06:57.362432   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:06:57.362511   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:06:57.412884   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:06:57.413107   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 19:06:57.450092   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:06:57.450212   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:06:57.486026   85117 provision.go:87] duration metric: took 443.605637ms to configureAuth
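configureAuth above regenerated the machine's server certificate with the SAN list from the generate line (127.0.0.1, 192.168.39.205, functional-016863, localhost, minikube); TLS clients validate whichever of those names they dialed. Inspecting the SANs of that cert, path from the provisioning log:

    # Print the Subject Alternative Name extension of the generated server cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'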
	I1017 19:06:57.486057   85117 buildroot.go:189] setting minikube options for container-runtime
	I1017 19:06:57.486228   85117 config.go:182] Loaded profile config "functional-016863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:06:57.486309   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:57.489476   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.489895   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.489928   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.490160   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:57.490354   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:57.490544   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:57.490703   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:57.490888   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:57.491101   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:57.491114   85117 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:07:03.084984   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:07:03.085021   85117 machine.go:96] duration metric: took 6.396234121s to provisionDockerMachine
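The tee-and-restart above only takes effect because, in the minikube guest, CRI-O's unit reads /etc/sysconfig/crio.minikube (an assumption consistent with the restart succeeding), which is how --insecure-registry 10.96.0.0/12 reaches the crio command line. A quick check on the node:

    # Show the options minikube wrote, then confirm crio restarted cleanly.
    minikube ssh -p functional-016863 -- cat /etc/sysconfig/crio.minikube
    minikube ssh -p functional-016863 -- sudo systemctl is-active crio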
	I1017 19:07:03.085042   85117 start.go:293] postStartSetup for "functional-016863" (driver="kvm2")
	I1017 19:07:03.085056   85117 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:07:03.085084   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.085514   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:07:03.085593   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.089211   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.089621   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.089655   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.089838   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.090055   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.090184   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.090354   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:07:03.173813   85117 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:07:03.179411   85117 command_runner.go:130] > NAME=Buildroot
	I1017 19:07:03.179437   85117 command_runner.go:130] > VERSION=2025.02-dirty
	I1017 19:07:03.179441   85117 command_runner.go:130] > ID=buildroot
	I1017 19:07:03.179446   85117 command_runner.go:130] > VERSION_ID=2025.02
	I1017 19:07:03.179452   85117 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1017 19:07:03.179493   85117 info.go:137] Remote host: Buildroot 2025.02
	I1017 19:07:03.179508   85117 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-75534/.minikube/addons for local assets ...
	I1017 19:07:03.179595   85117 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-75534/.minikube/files for local assets ...
	I1017 19:07:03.179714   85117 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> 794392.pem in /etc/ssl/certs
	I1017 19:07:03.179729   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> /etc/ssl/certs/794392.pem
	I1017 19:07:03.179835   85117 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts -> hosts in /etc/test/nested/copy/79439
	I1017 19:07:03.179847   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts -> /etc/test/nested/copy/79439/hosts
	I1017 19:07:03.179893   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/79439
	I1017 19:07:03.192128   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem --> /etc/ssl/certs/794392.pem (1708 bytes)
	I1017 19:07:03.223838   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts --> /etc/test/nested/copy/79439/hosts (40 bytes)
	I1017 19:07:03.313679   85117 start.go:296] duration metric: took 228.61978ms for postStartSetup
	I1017 19:07:03.313721   85117 fix.go:56] duration metric: took 6.643198174s for fixHost
	I1017 19:07:03.313742   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.317578   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.318077   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.318115   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.318367   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.318648   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.318838   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.319029   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.319295   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:07:03.319597   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:07:03.319613   85117 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1017 19:07:03.479608   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760728023.470011514
	
	I1017 19:07:03.479635   85117 fix.go:216] guest clock: 1760728023.470011514
	I1017 19:07:03.479642   85117 fix.go:229] Guest: 2025-10-17 19:07:03.470011514 +0000 UTC Remote: 2025-10-17 19:07:03.313724873 +0000 UTC m=+6.781586281 (delta=156.286641ms)
	I1017 19:07:03.479664   85117 fix.go:200] guest clock delta is within tolerance: 156.286641ms
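fix.go above samples the guest's date +%s.%N, compares it to the host clock, and accepts the ~156ms delta as within tolerance; a larger skew would force a resync. The same comparison by hand, profile name from this run:

    # Sample guest and host clocks back to back and print the skew in seconds.
    guest=$(minikube ssh -p functional-016863 -- date +%s.%N | tr -d '\r')
    host=$(date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %.3fs\n", h - g }'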
	I1017 19:07:03.479671   85117 start.go:83] releasing machines lock for "functional-016863", held for 6.809163445s
	I1017 19:07:03.479692   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.480016   85117 main.go:141] libmachine: (functional-016863) Calling .GetIP
	I1017 19:07:03.483255   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.483786   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.483830   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.484026   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.484650   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.484910   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.485041   85117 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:07:03.485087   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.485146   85117 ssh_runner.go:195] Run: cat /version.json
	I1017 19:07:03.485170   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.488247   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.488613   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.488732   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.488760   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.488948   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.489117   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.489150   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.489166   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.489373   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.489440   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.489584   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.489660   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:07:03.489750   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.489896   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:07:03.669674   85117 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1017 19:07:03.669755   85117 command_runner.go:130] > {"iso_version": "v1.37.0-1760609724-21757", "kicbase_version": "v0.0.48-1760363564-21724", "minikube_version": "v1.37.0", "commit": "fd6729aa481bc45098452b0ed0ffbe097c29d1bb"}
	I1017 19:07:03.669885   85117 ssh_runner.go:195] Run: systemctl --version
	I1017 19:07:03.691813   85117 command_runner.go:130] > systemd 256 (256.7)
	I1017 19:07:03.691879   85117 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1017 19:07:03.691965   85117 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:07:03.942910   85117 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1017 19:07:03.963385   85117 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1017 19:07:03.963654   85117 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:07:03.963723   85117 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:07:04.004504   85117 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
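[note] The find/mv step above renames any bridge or podman CNI configs out of the way (suffix .mk_disabled) so they cannot conflict with the CNI minikube manages; here nothing matched. The same command in a more readable, quoted form (the logged form leaves the globs and parentheses unquoted and relies on the remote shell not expanding them):
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;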
	I1017 19:07:04.004543   85117 start.go:495] detecting cgroup driver to use...
	I1017 19:07:04.004649   85117 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:07:04.048623   85117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:07:04.093677   85117 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:07:04.093751   85117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:07:04.125946   85117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:07:04.177031   85117 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:07:04.556434   85117 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:07:04.871840   85117 docker.go:234] disabling docker service ...
	I1017 19:07:04.871920   85117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:07:04.914455   85117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:07:04.944209   85117 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:07:05.273173   85117 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:07:05.563772   85117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
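[note] Because this cluster uses CRI-O, the steps above stop, disable, and mask both cri-docker and docker so neither can claim the CRI socket on the next boot. The same sequence, condensed into a sketch (the final is-active probe is written in plain `systemctl` form rather than the exact invocation logged above):
	for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	    sudo systemctl stop -f "$unit" 2>/dev/null || true
	done
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	# confirm docker is really gone before handing the socket to CRI-O
	sudo systemctl is-active --quiet docker && echo "docker still active" >&2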
	I1017 19:07:05.602259   85117 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:07:05.639391   85117 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
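[note] The tee step above pins crictl to the CRI-O socket so later `crictl` calls do not have to probe for an endpoint; its whole effect is the one-line config file echoed back on the next line. Reproduced directly:
	sudo mkdir -p /etc
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml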
	I1017 19:07:05.639452   85117 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:07:05.639509   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.662293   85117 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:07:05.662360   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.681766   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.702415   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.723309   85117 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:07:05.743334   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.758794   85117 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.777348   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
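[note] The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, force conmon into the pod cgroup, and open unprivileged ports through default_sysctls. Collected into one script for readability, using the same expressions as the log:
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo grep -q '^ *default_sysctls' "$CONF" || \
	    sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"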
	I1017 19:07:05.792297   85117 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:07:05.810337   85117 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1017 19:07:05.810427   85117 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:07:05.829378   85117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:07:06.061473   85117 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:08:36.459335   85117 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.39776602s)
	I1017 19:08:36.459402   85117 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:08:36.459487   85117 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:08:36.466176   85117 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1017 19:08:36.466208   85117 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1017 19:08:36.466216   85117 command_runner.go:130] > Device: 0,23	Inode: 1978        Links: 1
	I1017 19:08:36.466222   85117 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1017 19:08:36.466229   85117 command_runner.go:130] > Access: 2025-10-17 19:08:36.354383352 +0000
	I1017 19:08:36.466239   85117 command_runner.go:130] > Modify: 2025-10-17 19:08:36.274379788 +0000
	I1017 19:08:36.466245   85117 command_runner.go:130] > Change: 2025-10-17 19:08:36.274379788 +0000
	I1017 19:08:36.466267   85117 command_runner.go:130] >  Birth: 2025-10-17 19:08:36.274379788 +0000
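[note] start.go:542 above waits up to 60s for the CRI-O socket to appear after the restart (which itself took 1m30s here, consuming most of the SoftStart budget); the stat output confirms the socket exists with the expected mode and owner. A sketch of that wait as a shell loop:
	deadline=$((SECONDS + 60))
	until stat /var/run/crio/crio.sock >/dev/null 2>&1; do
	    if (( SECONDS >= deadline )); then
	        echo "timed out waiting for /var/run/crio/crio.sock" >&2
	        exit 1
	    fi
	    sleep 1
	done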
	I1017 19:08:36.466319   85117 start.go:563] Will wait 60s for crictl version
	I1017 19:08:36.466390   85117 ssh_runner.go:195] Run: which crictl
	I1017 19:08:36.470951   85117 command_runner.go:130] > /usr/bin/crictl
	I1017 19:08:36.471037   85117 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1017 19:08:36.516077   85117 command_runner.go:130] > Version:  0.1.0
	I1017 19:08:36.516101   85117 command_runner.go:130] > RuntimeName:  cri-o
	I1017 19:08:36.516106   85117 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1017 19:08:36.516111   85117 command_runner.go:130] > RuntimeApiVersion:  v1
	I1017 19:08:36.516132   85117 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
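[note] The probe above resolves the crictl binary and asks the runtime to identify itself over the configured socket; RuntimeName/RuntimeVersion confirm CRI-O 1.29.1 is the process answering. The same check by hand:
	CRICTL=$(which crictl)    # /usr/bin/crictl on this guest
	sudo "$CRICTL" version    # expect RuntimeName: cri-o, RuntimeVersion: 1.29.1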
	I1017 19:08:36.516223   85117 ssh_runner.go:195] Run: crio --version
	I1017 19:08:36.548879   85117 command_runner.go:130] > crio version 1.29.1
	I1017 19:08:36.548904   85117 command_runner.go:130] > Version:        1.29.1
	I1017 19:08:36.548909   85117 command_runner.go:130] > GitCommit:      unknown
	I1017 19:08:36.548925   85117 command_runner.go:130] > GitCommitDate:  unknown
	I1017 19:08:36.548929   85117 command_runner.go:130] > GitTreeState:   clean
	I1017 19:08:36.548935   85117 command_runner.go:130] > BuildDate:      2025-10-16T13:23:57Z
	I1017 19:08:36.548939   85117 command_runner.go:130] > GoVersion:      go1.23.4
	I1017 19:08:36.548942   85117 command_runner.go:130] > Compiler:       gc
	I1017 19:08:36.548947   85117 command_runner.go:130] > Platform:       linux/amd64
	I1017 19:08:36.548951   85117 command_runner.go:130] > Linkmode:       dynamic
	I1017 19:08:36.548955   85117 command_runner.go:130] > BuildTags:      
	I1017 19:08:36.548959   85117 command_runner.go:130] >   containers_image_ostree_stub
	I1017 19:08:36.548963   85117 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1017 19:08:36.548966   85117 command_runner.go:130] >   btrfs_noversion
	I1017 19:08:36.548970   85117 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1017 19:08:36.548974   85117 command_runner.go:130] >   libdm_no_deferred_remove
	I1017 19:08:36.548978   85117 command_runner.go:130] >   seccomp
	I1017 19:08:36.548982   85117 command_runner.go:130] > LDFlags:          unknown
	I1017 19:08:36.549001   85117 command_runner.go:130] > SeccompEnabled:   true
	I1017 19:08:36.549005   85117 command_runner.go:130] > AppArmorEnabled:  false
	I1017 19:08:36.549081   85117 ssh_runner.go:195] Run: crio --version
	I1017 19:08:36.579072   85117 command_runner.go:130] > crio version 1.29.1
	I1017 19:08:36.579097   85117 command_runner.go:130] > Version:        1.29.1
	I1017 19:08:36.579102   85117 command_runner.go:130] > GitCommit:      unknown
	I1017 19:08:36.579106   85117 command_runner.go:130] > GitCommitDate:  unknown
	I1017 19:08:36.579109   85117 command_runner.go:130] > GitTreeState:   clean
	I1017 19:08:36.579114   85117 command_runner.go:130] > BuildDate:      2025-10-16T13:23:57Z
	I1017 19:08:36.579118   85117 command_runner.go:130] > GoVersion:      go1.23.4
	I1017 19:08:36.579122   85117 command_runner.go:130] > Compiler:       gc
	I1017 19:08:36.579126   85117 command_runner.go:130] > Platform:       linux/amd64
	I1017 19:08:36.579129   85117 command_runner.go:130] > Linkmode:       dynamic
	I1017 19:08:36.579133   85117 command_runner.go:130] > BuildTags:      
	I1017 19:08:36.579137   85117 command_runner.go:130] >   containers_image_ostree_stub
	I1017 19:08:36.579141   85117 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1017 19:08:36.579144   85117 command_runner.go:130] >   btrfs_noversion
	I1017 19:08:36.579148   85117 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1017 19:08:36.579152   85117 command_runner.go:130] >   libdm_no_deferred_remove
	I1017 19:08:36.579156   85117 command_runner.go:130] >   seccomp
	I1017 19:08:36.579159   85117 command_runner.go:130] > LDFlags:          unknown
	I1017 19:08:36.579162   85117 command_runner.go:130] > SeccompEnabled:   true
	I1017 19:08:36.579166   85117 command_runner.go:130] > AppArmorEnabled:  false
	I1017 19:08:36.581921   85117 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1017 19:08:36.583156   85117 main.go:141] libmachine: (functional-016863) Calling .GetIP
	I1017 19:08:36.586303   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:08:36.586761   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:08:36.586791   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:08:36.587045   85117 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1017 19:08:36.592096   85117 command_runner.go:130] > 192.168.39.1	host.minikube.internal
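[note] The grep above verifies that host.minikube.internal already resolves to the gateway (192.168.39.1); when the entry is missing, minikube appends one. A sketch of the check-then-append, assuming the same address:
	if ! grep -q 'host.minikube.internal$' /etc/hosts; then
	    printf '192.168.39.1\thost.minikube.internal\n' | sudo tee -a /etc/hosts
	fi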
	I1017 19:08:36.592194   85117 kubeadm.go:883] updating cluster {Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:08:36.592323   85117 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:08:36.592384   85117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:08:36.644213   85117 command_runner.go:130] > {
	I1017 19:08:36.644235   85117 command_runner.go:130] >   "images": [
	I1017 19:08:36.644239   85117 command_runner.go:130] >     {
	I1017 19:08:36.644246   85117 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1017 19:08:36.644251   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644257   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1017 19:08:36.644260   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644265   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644287   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1017 19:08:36.644298   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1017 19:08:36.644304   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644310   85117 command_runner.go:130] >       "size": "109379124",
	I1017 19:08:36.644319   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644328   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644357   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644368   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644379   85117 command_runner.go:130] >     },
	I1017 19:08:36.644384   85117 command_runner.go:130] >     {
	I1017 19:08:36.644397   85117 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1017 19:08:36.644403   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644412   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1017 19:08:36.644418   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644429   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644441   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1017 19:08:36.644455   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1017 19:08:36.644463   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644489   85117 command_runner.go:130] >       "size": "31470524",
	I1017 19:08:36.644500   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644506   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644517   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644524   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644532   85117 command_runner.go:130] >     },
	I1017 19:08:36.644537   85117 command_runner.go:130] >     {
	I1017 19:08:36.644546   85117 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1017 19:08:36.644570   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644577   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1017 19:08:36.644586   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644592   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644602   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1017 19:08:36.644610   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1017 19:08:36.644616   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644620   85117 command_runner.go:130] >       "size": "76103547",
	I1017 19:08:36.644623   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644628   85117 command_runner.go:130] >       "username": "nonroot",
	I1017 19:08:36.644634   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644638   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644644   85117 command_runner.go:130] >     },
	I1017 19:08:36.644655   85117 command_runner.go:130] >     {
	I1017 19:08:36.644664   85117 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1017 19:08:36.644668   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644675   85117 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1017 19:08:36.644678   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644685   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644692   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1017 19:08:36.644707   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1017 19:08:36.644713   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644716   85117 command_runner.go:130] >       "size": "195976448",
	I1017 19:08:36.644720   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.644726   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.644729   85117 command_runner.go:130] >       },
	I1017 19:08:36.644733   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644737   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644741   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644744   85117 command_runner.go:130] >     },
	I1017 19:08:36.644747   85117 command_runner.go:130] >     {
	I1017 19:08:36.644753   85117 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1017 19:08:36.644760   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644764   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1017 19:08:36.644767   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644772   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644781   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1017 19:08:36.644788   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1017 19:08:36.644794   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644798   85117 command_runner.go:130] >       "size": "89046001",
	I1017 19:08:36.644802   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.644806   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.644810   85117 command_runner.go:130] >       },
	I1017 19:08:36.644813   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644819   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644822   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644830   85117 command_runner.go:130] >     },
	I1017 19:08:36.644836   85117 command_runner.go:130] >     {
	I1017 19:08:36.644842   85117 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1017 19:08:36.644845   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644850   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1017 19:08:36.644856   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644860   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644868   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1017 19:08:36.644877   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1017 19:08:36.644880   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644884   85117 command_runner.go:130] >       "size": "76004181",
	I1017 19:08:36.644888   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.644892   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.644895   85117 command_runner.go:130] >       },
	I1017 19:08:36.644899   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644902   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644908   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644911   85117 command_runner.go:130] >     },
	I1017 19:08:36.644914   85117 command_runner.go:130] >     {
	I1017 19:08:36.644920   85117 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1017 19:08:36.644924   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644928   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1017 19:08:36.644932   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644944   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644951   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1017 19:08:36.644958   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1017 19:08:36.644961   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644964   85117 command_runner.go:130] >       "size": "73138073",
	I1017 19:08:36.644968   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644972   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644975   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644979   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644982   85117 command_runner.go:130] >     },
	I1017 19:08:36.644991   85117 command_runner.go:130] >     {
	I1017 19:08:36.644999   85117 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1017 19:08:36.645003   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.645010   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1017 19:08:36.645013   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645017   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.645041   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1017 19:08:36.645052   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1017 19:08:36.645055   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645059   85117 command_runner.go:130] >       "size": "53844823",
	I1017 19:08:36.645062   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.645066   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.645068   85117 command_runner.go:130] >       },
	I1017 19:08:36.645072   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.645075   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.645079   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.645081   85117 command_runner.go:130] >     },
	I1017 19:08:36.645084   85117 command_runner.go:130] >     {
	I1017 19:08:36.645090   85117 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1017 19:08:36.645093   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.645097   85117 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1017 19:08:36.645100   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645104   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.645110   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1017 19:08:36.645116   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1017 19:08:36.645120   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645123   85117 command_runner.go:130] >       "size": "742092",
	I1017 19:08:36.645126   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.645129   85117 command_runner.go:130] >         "value": "65535"
	I1017 19:08:36.645132   85117 command_runner.go:130] >       },
	I1017 19:08:36.645136   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.645143   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.645147   85117 command_runner.go:130] >       "pinned": true
	I1017 19:08:36.645154   85117 command_runner.go:130] >     }
	I1017 19:08:36.645157   85117 command_runner.go:130] >   ]
	I1017 19:08:36.645160   85117 command_runner.go:130] > }
	I1017 19:08:36.645398   85117 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:08:36.645415   85117 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:08:36.645478   85117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:08:36.684800   85117 command_runner.go:130] > {
	I1017 19:08:36.684832   85117 command_runner.go:130] >   "images": [
	I1017 19:08:36.684855   85117 command_runner.go:130] >     {
	I1017 19:08:36.684869   85117 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1017 19:08:36.684877   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.684887   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1017 19:08:36.684892   85117 command_runner.go:130] >       ],
	I1017 19:08:36.684896   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.684909   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1017 19:08:36.684916   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1017 19:08:36.684919   85117 command_runner.go:130] >       ],
	I1017 19:08:36.684923   85117 command_runner.go:130] >       "size": "109379124",
	I1017 19:08:36.684927   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.684930   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.684935   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.684938   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.684942   85117 command_runner.go:130] >     },
	I1017 19:08:36.684945   85117 command_runner.go:130] >     {
	I1017 19:08:36.684950   85117 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1017 19:08:36.684955   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.684960   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1017 19:08:36.684973   85117 command_runner.go:130] >       ],
	I1017 19:08:36.684980   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.684994   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1017 19:08:36.685002   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1017 19:08:36.685005   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685013   85117 command_runner.go:130] >       "size": "31470524",
	I1017 19:08:36.685018   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.685021   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685025   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685029   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685032   85117 command_runner.go:130] >     },
	I1017 19:08:36.685035   85117 command_runner.go:130] >     {
	I1017 19:08:36.685041   85117 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1017 19:08:36.685045   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685055   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1017 19:08:36.685061   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685064   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685072   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1017 19:08:36.685081   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1017 19:08:36.685084   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685088   85117 command_runner.go:130] >       "size": "76103547",
	I1017 19:08:36.685092   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.685095   85117 command_runner.go:130] >       "username": "nonroot",
	I1017 19:08:36.685098   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685105   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685108   85117 command_runner.go:130] >     },
	I1017 19:08:36.685111   85117 command_runner.go:130] >     {
	I1017 19:08:36.685116   85117 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1017 19:08:36.685121   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685125   85117 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1017 19:08:36.685128   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685132   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685140   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1017 19:08:36.685152   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1017 19:08:36.685158   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685162   85117 command_runner.go:130] >       "size": "195976448",
	I1017 19:08:36.685165   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685169   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685172   85117 command_runner.go:130] >       },
	I1017 19:08:36.685176   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685179   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685183   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685186   85117 command_runner.go:130] >     },
	I1017 19:08:36.685195   85117 command_runner.go:130] >     {
	I1017 19:08:36.685202   85117 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1017 19:08:36.685205   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685209   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1017 19:08:36.685217   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685224   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685230   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1017 19:08:36.685243   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1017 19:08:36.685249   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685252   85117 command_runner.go:130] >       "size": "89046001",
	I1017 19:08:36.685256   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685259   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685263   85117 command_runner.go:130] >       },
	I1017 19:08:36.685266   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685270   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685274   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685277   85117 command_runner.go:130] >     },
	I1017 19:08:36.685280   85117 command_runner.go:130] >     {
	I1017 19:08:36.685292   85117 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1017 19:08:36.685301   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685310   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1017 19:08:36.685322   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685332   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685344   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1017 19:08:36.685361   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1017 19:08:36.685371   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685378   85117 command_runner.go:130] >       "size": "76004181",
	I1017 19:08:36.685388   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685394   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685403   85117 command_runner.go:130] >       },
	I1017 19:08:36.685407   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685414   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685418   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685421   85117 command_runner.go:130] >     },
	I1017 19:08:36.685424   85117 command_runner.go:130] >     {
	I1017 19:08:36.685430   85117 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1017 19:08:36.685437   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685448   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1017 19:08:36.685454   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685457   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685464   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1017 19:08:36.685473   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1017 19:08:36.685476   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685483   85117 command_runner.go:130] >       "size": "73138073",
	I1017 19:08:36.685487   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.685491   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685495   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685498   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685502   85117 command_runner.go:130] >     },
	I1017 19:08:36.685505   85117 command_runner.go:130] >     {
	I1017 19:08:36.685511   85117 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1017 19:08:36.685517   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685522   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1017 19:08:36.685528   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685531   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685577   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1017 19:08:36.685591   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1017 19:08:36.685594   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685598   85117 command_runner.go:130] >       "size": "53844823",
	I1017 19:08:36.685601   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685604   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685607   85117 command_runner.go:130] >       },
	I1017 19:08:36.685611   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685614   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685618   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685621   85117 command_runner.go:130] >     },
	I1017 19:08:36.685624   85117 command_runner.go:130] >     {
	I1017 19:08:36.685629   85117 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1017 19:08:36.685638   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685642   85117 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1017 19:08:36.685651   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685658   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685664   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1017 19:08:36.685673   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1017 19:08:36.685677   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685680   85117 command_runner.go:130] >       "size": "742092",
	I1017 19:08:36.685684   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685688   85117 command_runner.go:130] >         "value": "65535"
	I1017 19:08:36.685691   85117 command_runner.go:130] >       },
	I1017 19:08:36.685697   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685700   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685703   85117 command_runner.go:130] >       "pinned": true
	I1017 19:08:36.685706   85117 command_runner.go:130] >     }
	I1017 19:08:36.685711   85117 command_runner.go:130] >   ]
	I1017 19:08:36.685714   85117 command_runner.go:130] > }
	I1017 19:08:36.685822   85117 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:08:36.685834   85117 cache_images.go:85] Images are preloaded, skipping loading
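[note] Both `crictl images --output json` dumps above list every image the v1.34.1/crio preload ships (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns, pause, kindnetd, storage-provisioner), which is why extraction and loading are both skipped. A quick way to reproduce the comparison by hand, assuming jq is available on the guest:
	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
	# every tag required for v1.34.1 should appear, e.g.:
	#   registry.k8s.io/kube-apiserver:v1.34.1
	#   registry.k8s.io/pause:3.10.1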
	I1017 19:08:36.685842   85117 kubeadm.go:934] updating node { 192.168.39.205 8441 v1.34.1 crio true true} ...
	I1017 19:08:36.685955   85117 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-016863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
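[note] kubeadm.go:946 above renders the kubelet systemd override for this node; the empty ExecStart= line clears any packaged command before the minikube-specific one is set. A sketch of installing it as a drop-in, with the drop-in path an assumption for illustration (the log does not name where minikube writes it):
	# drop-in path is an assumption; the log does not show it
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-016863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	EOF
	sudo systemctl daemon-reload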
	I1017 19:08:36.686028   85117 ssh_runner.go:195] Run: crio config
	I1017 19:08:36.721698   85117 command_runner.go:130] ! time="2025-10-17 19:08:36.711815300Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1017 19:08:36.726934   85117 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1017 19:08:36.733071   85117 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1017 19:08:36.733099   85117 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1017 19:08:36.733109   85117 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1017 19:08:36.733113   85117 command_runner.go:130] > #
	I1017 19:08:36.733123   85117 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1017 19:08:36.733131   85117 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1017 19:08:36.733140   85117 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1017 19:08:36.733156   85117 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1017 19:08:36.733165   85117 command_runner.go:130] > # reload'.
	I1017 19:08:36.733177   85117 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1017 19:08:36.733189   85117 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1017 19:08:36.733199   85117 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1017 19:08:36.733209   85117 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1017 19:08:36.733222   85117 command_runner.go:130] > [crio]
	I1017 19:08:36.733230   85117 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1017 19:08:36.733234   85117 command_runner.go:130] > # containers images, in this directory.
	I1017 19:08:36.733241   85117 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1017 19:08:36.733256   85117 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1017 19:08:36.733263   85117 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1017 19:08:36.733270   85117 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1017 19:08:36.733277   85117 command_runner.go:130] > # imagestore = ""
	I1017 19:08:36.733283   85117 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1017 19:08:36.733291   85117 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1017 19:08:36.733296   85117 command_runner.go:130] > # storage_driver = "overlay"
	I1017 19:08:36.733307   85117 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1017 19:08:36.733320   85117 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1017 19:08:36.733327   85117 command_runner.go:130] > storage_option = [
	I1017 19:08:36.733337   85117 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1017 19:08:36.733342   85117 command_runner.go:130] > ]
	I1017 19:08:36.733354   85117 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1017 19:08:36.733363   85117 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1017 19:08:36.733368   85117 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1017 19:08:36.733374   85117 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1017 19:08:36.733380   85117 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1017 19:08:36.733387   85117 command_runner.go:130] > # always happen on a node reboot
	I1017 19:08:36.733391   85117 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1017 19:08:36.733411   85117 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1017 19:08:36.733424   85117 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1017 19:08:36.733432   85117 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1017 19:08:36.733443   85117 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1017 19:08:36.733456   85117 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1017 19:08:36.733470   85117 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1017 19:08:36.733480   85117 command_runner.go:130] > # internal_wipe = true
	I1017 19:08:36.733489   85117 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1017 19:08:36.733497   85117 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1017 19:08:36.733504   85117 command_runner.go:130] > # internal_repair = false
	I1017 19:08:36.733522   85117 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1017 19:08:36.733534   85117 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1017 19:08:36.733544   85117 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1017 19:08:36.733565   85117 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1017 19:08:36.733582   85117 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1017 19:08:36.733590   85117 command_runner.go:130] > [crio.api]
	I1017 19:08:36.733598   85117 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1017 19:08:36.733608   85117 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1017 19:08:36.733616   85117 command_runner.go:130] > # IP address on which the stream server will listen.
	I1017 19:08:36.733626   85117 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1017 19:08:36.733636   85117 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1017 19:08:36.733647   85117 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1017 19:08:36.733653   85117 command_runner.go:130] > # stream_port = "0"
	I1017 19:08:36.733665   85117 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1017 19:08:36.733671   85117 command_runner.go:130] > # stream_enable_tls = false
	I1017 19:08:36.733683   85117 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1017 19:08:36.733692   85117 command_runner.go:130] > # stream_idle_timeout = ""
	I1017 19:08:36.733699   85117 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1017 19:08:36.733709   85117 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1017 19:08:36.733719   85117 command_runner.go:130] > # minutes.
	I1017 19:08:36.733729   85117 command_runner.go:130] > # stream_tls_cert = ""
	I1017 19:08:36.733738   85117 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1017 19:08:36.733749   85117 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1017 19:08:36.733755   85117 command_runner.go:130] > # stream_tls_key = ""
	I1017 19:08:36.733767   85117 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1017 19:08:36.733777   85117 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1017 19:08:36.733807   85117 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1017 19:08:36.733817   85117 command_runner.go:130] > # stream_tls_ca = ""
	I1017 19:08:36.733828   85117 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1017 19:08:36.733839   85117 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1017 19:08:36.733850   85117 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1017 19:08:36.733860   85117 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1017 19:08:36.733870   85117 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1017 19:08:36.733888   85117 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1017 19:08:36.733894   85117 command_runner.go:130] > [crio.runtime]
	I1017 19:08:36.733902   85117 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1017 19:08:36.733914   85117 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1017 19:08:36.733923   85117 command_runner.go:130] > # "nofile=1024:2048"
	I1017 19:08:36.733936   85117 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1017 19:08:36.733945   85117 command_runner.go:130] > # default_ulimits = [
	I1017 19:08:36.733950   85117 command_runner.go:130] > # ]
	I1017 19:08:36.733961   85117 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1017 19:08:36.733966   85117 command_runner.go:130] > # no_pivot = false
	I1017 19:08:36.733974   85117 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1017 19:08:36.733984   85117 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1017 19:08:36.733990   85117 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1017 19:08:36.734005   85117 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1017 19:08:36.734017   85117 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1017 19:08:36.734041   85117 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1017 19:08:36.734050   85117 command_runner.go:130] > conmon = "/usr/bin/conmon"
	I1017 19:08:36.734057   85117 command_runner.go:130] > # Cgroup setting for conmon
	I1017 19:08:36.734070   85117 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1017 19:08:36.734079   85117 command_runner.go:130] > conmon_cgroup = "pod"
	I1017 19:08:36.734085   85117 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1017 19:08:36.734096   85117 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1017 19:08:36.734105   85117 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1017 19:08:36.734115   85117 command_runner.go:130] > conmon_env = [
	I1017 19:08:36.734124   85117 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1017 19:08:36.734133   85117 command_runner.go:130] > ]
	I1017 19:08:36.734142   85117 command_runner.go:130] > # Additional environment variables to set for all the
	I1017 19:08:36.734152   85117 command_runner.go:130] > # containers. These are overridden if set in the
	I1017 19:08:36.734161   85117 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1017 19:08:36.734170   85117 command_runner.go:130] > # default_env = [
	I1017 19:08:36.734175   85117 command_runner.go:130] > # ]
	I1017 19:08:36.734186   85117 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1017 19:08:36.734193   85117 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1017 19:08:36.734374   85117 command_runner.go:130] > # selinux = false
	I1017 19:08:36.734484   85117 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1017 19:08:36.734495   85117 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1017 19:08:36.734505   85117 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1017 19:08:36.734516   85117 command_runner.go:130] > # seccomp_profile = ""
	I1017 19:08:36.734531   85117 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1017 19:08:36.734543   85117 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1017 19:08:36.734567   85117 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1017 19:08:36.734585   85117 command_runner.go:130] > # which might increase security.
	I1017 19:08:36.734593   85117 command_runner.go:130] > # This option is currently deprecated,
	I1017 19:08:36.734610   85117 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1017 19:08:36.734624   85117 command_runner.go:130] > seccomp_use_default_when_empty = false
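	A sketch of pointing CRI-O at a custom profile while opting into the stricter empty-profile handling described above; the profile path is an assumption, not a file present on this node:

	seccomp_profile = "/etc/crio/seccomp.json"
	seccomp_use_default_when_empty = true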
	I1017 19:08:36.734634   85117 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1017 19:08:36.734646   85117 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1017 19:08:36.734697   85117 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1017 19:08:36.735591   85117 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1017 19:08:36.735609   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.735623   85117 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1017 19:08:36.735636   85117 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1017 19:08:36.735643   85117 command_runner.go:130] > # the cgroup blockio controller.
	I1017 19:08:36.735656   85117 command_runner.go:130] > # blockio_config_file = ""
	I1017 19:08:36.735670   85117 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1017 19:08:36.735675   85117 command_runner.go:130] > # blockio parameters.
	I1017 19:08:36.735681   85117 command_runner.go:130] > # blockio_reload = false
	I1017 19:08:36.735706   85117 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1017 19:08:36.735733   85117 command_runner.go:130] > # irqbalance daemon.
	I1017 19:08:36.735812   85117 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1017 19:08:36.735833   85117 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask that CRI-O should
	I1017 19:08:36.736170   85117 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1017 19:08:36.736193   85117 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1017 19:08:36.736203   85117 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1017 19:08:36.736229   85117 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1017 19:08:36.736240   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.736246   85117 command_runner.go:130] > # rdt_config_file = ""
	I1017 19:08:36.736258   85117 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1017 19:08:36.736268   85117 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1017 19:08:36.736300   85117 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1017 19:08:36.736312   85117 command_runner.go:130] > # separate_pull_cgroup = ""
	I1017 19:08:36.736321   85117 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1017 19:08:36.736329   85117 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1017 19:08:36.736335   85117 command_runner.go:130] > # will be added.
	I1017 19:08:36.736341   85117 command_runner.go:130] > # default_capabilities = [
	I1017 19:08:36.736349   85117 command_runner.go:130] > # 	"CHOWN",
	I1017 19:08:36.736355   85117 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1017 19:08:36.736360   85117 command_runner.go:130] > # 	"FSETID",
	I1017 19:08:36.736366   85117 command_runner.go:130] > # 	"FOWNER",
	I1017 19:08:36.736374   85117 command_runner.go:130] > # 	"SETGID",
	I1017 19:08:36.736379   85117 command_runner.go:130] > # 	"SETUID",
	I1017 19:08:36.736384   85117 command_runner.go:130] > # 	"SETPCAP",
	I1017 19:08:36.736392   85117 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1017 19:08:36.736401   85117 command_runner.go:130] > # 	"KILL",
	I1017 19:08:36.736409   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736420   85117 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1017 19:08:36.736433   85117 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1017 19:08:36.736444   85117 command_runner.go:130] > # add_inheritable_capabilities = false
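	Read together, a trimmed capability set with inheritable capabilities enabled for non-root use might look like this sketch (the particular selection is illustrative):

	default_capabilities = [
		"CHOWN",
		"NET_BIND_SERVICE",
		"KILL",
	]
	add_inheritable_capabilities = true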
	I1017 19:08:36.736452   85117 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1017 19:08:36.736463   85117 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1017 19:08:36.736472   85117 command_runner.go:130] > default_sysctls = [
	I1017 19:08:36.736482   85117 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1017 19:08:36.736490   85117 command_runner.go:130] > ]
	I1017 19:08:36.736501   85117 command_runner.go:130] > # List of devices on the host that a
	I1017 19:08:36.736513   85117 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1017 19:08:36.736521   85117 command_runner.go:130] > # allowed_devices = [
	I1017 19:08:36.736526   85117 command_runner.go:130] > # 	"/dev/fuse",
	I1017 19:08:36.736534   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736541   85117 command_runner.go:130] > # List of additional devices, specified as
	I1017 19:08:36.736569   85117 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1017 19:08:36.736580   85117 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1017 19:08:36.736589   85117 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1017 19:08:36.736598   85117 command_runner.go:130] > # additional_devices = [
	I1017 19:08:36.736602   85117 command_runner.go:130] > # ]
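	A sketch combining the two device knobs in the formats given above; the device paths echo the examples in the comments rather than this host's hardware:

	allowed_devices = [
		"/dev/fuse",
	]
	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",
	]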
	I1017 19:08:36.736612   85117 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1017 19:08:36.736621   85117 command_runner.go:130] > # cdi_spec_dirs = [
	I1017 19:08:36.736627   85117 command_runner.go:130] > # 	"/etc/cdi",
	I1017 19:08:36.736635   85117 command_runner.go:130] > # 	"/var/run/cdi",
	I1017 19:08:36.736640   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736652   85117 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1017 19:08:36.736664   85117 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1017 19:08:36.736673   85117 command_runner.go:130] > # Defaults to false.
	I1017 19:08:36.736684   85117 command_runner.go:130] > # device_ownership_from_security_context = false
	I1017 19:08:36.736696   85117 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1017 19:08:36.736707   85117 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1017 19:08:36.736715   85117 command_runner.go:130] > # hooks_dir = [
	I1017 19:08:36.736723   85117 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1017 19:08:36.736732   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736744   85117 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1017 19:08:36.736756   85117 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1017 19:08:36.736767   85117 command_runner.go:130] > # its default mounts from the following two files:
	I1017 19:08:36.736774   85117 command_runner.go:130] > #
	I1017 19:08:36.736783   85117 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1017 19:08:36.736795   85117 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1017 19:08:36.736809   85117 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1017 19:08:36.736817   85117 command_runner.go:130] > #
	I1017 19:08:36.736826   85117 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1017 19:08:36.736838   85117 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1017 19:08:36.736850   85117 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1017 19:08:36.736858   85117 command_runner.go:130] > #      only add mounts it finds in this file.
	I1017 19:08:36.736865   85117 command_runner.go:130] > #
	I1017 19:08:36.736871   85117 command_runner.go:130] > # default_mounts_file = ""
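	Given the /SRC:/DST, one-mount-per-line format described above, a hypothetical override file and the setting pointing CRI-O at it could look like:

	# contents of a hypothetical /etc/crio/mounts.conf:
	#   /usr/share/secrets:/run/secrets
	default_mounts_file = "/etc/crio/mounts.conf"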
	I1017 19:08:36.736882   85117 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1017 19:08:36.736894   85117 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1017 19:08:36.736914   85117 command_runner.go:130] > pids_limit = 1024
	I1017 19:08:36.736938   85117 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1017 19:08:36.736957   85117 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1017 19:08:36.736976   85117 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1017 19:08:36.737004   85117 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1017 19:08:36.737015   85117 command_runner.go:130] > # log_size_max = -1
	I1017 19:08:36.737028   85117 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1017 19:08:36.737037   85117 command_runner.go:130] > # log_to_journald = false
	I1017 19:08:36.737051   85117 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1017 19:08:36.737062   85117 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1017 19:08:36.737073   85117 command_runner.go:130] > # Path to directory for container attach sockets.
	I1017 19:08:36.737084   85117 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1017 19:08:36.737094   85117 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1017 19:08:36.737102   85117 command_runner.go:130] > # bind_mount_prefix = ""
	I1017 19:08:36.737107   85117 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1017 19:08:36.737113   85117 command_runner.go:130] > # read_only = false
	I1017 19:08:36.737122   85117 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1017 19:08:36.737131   85117 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1017 19:08:36.737137   85117 command_runner.go:130] > # live configuration reload.
	I1017 19:08:36.737141   85117 command_runner.go:130] > # log_level = "info"
	I1017 19:08:36.737149   85117 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1017 19:08:36.737153   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.737159   85117 command_runner.go:130] > # log_filter = ""
	I1017 19:08:36.737165   85117 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1017 19:08:36.737175   85117 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1017 19:08:36.737181   85117 command_runner.go:130] > # separated by comma.
	I1017 19:08:36.737189   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737199   85117 command_runner.go:130] > # uid_mappings = ""
	I1017 19:08:36.737214   85117 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1017 19:08:36.737222   85117 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1017 19:08:36.737227   85117 command_runner.go:130] > # separated by comma.
	I1017 19:08:36.737234   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737238   85117 command_runner.go:130] > # gid_mappings = ""
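	Applying the containerID:HostID:Size form from the comments, an illustrative single-range mapping (not active in this run) would be:

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"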
	I1017 19:08:36.737244   85117 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1017 19:08:36.737252   85117 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1017 19:08:36.737258   85117 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1017 19:08:36.737268   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737274   85117 command_runner.go:130] > # minimum_mappable_uid = -1
	I1017 19:08:36.737280   85117 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1017 19:08:36.737285   85117 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1017 19:08:36.737293   85117 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1017 19:08:36.737301   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737306   85117 command_runner.go:130] > # minimum_mappable_gid = -1
	I1017 19:08:36.737312   85117 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1017 19:08:36.737318   85117 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1017 19:08:36.737326   85117 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I1017 19:08:36.737330   85117 command_runner.go:130] > # ctr_stop_timeout = 30
	I1017 19:08:36.737335   85117 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1017 19:08:36.737343   85117 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1017 19:08:36.737349   85117 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1017 19:08:36.737354   85117 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1017 19:08:36.737360   85117 command_runner.go:130] > drop_infra_ctr = false
	I1017 19:08:36.737365   85117 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1017 19:08:36.737370   85117 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1017 19:08:36.737377   85117 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1017 19:08:36.737382   85117 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1017 19:08:36.737388   85117 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1017 19:08:36.737396   85117 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1017 19:08:36.737402   85117 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1017 19:08:36.737409   85117 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1017 19:08:36.737412   85117 command_runner.go:130] > # shared_cpuset = ""
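	Using the Linux CPU list format these comments reference, an illustrative split that reserves the first two CPUs for infra containers and shares the next two would be:

	infra_ctr_cpuset = "0-1"
	shared_cpuset = "2-3"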
	I1017 19:08:36.737421   85117 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1017 19:08:36.737428   85117 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1017 19:08:36.737434   85117 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1017 19:08:36.737441   85117 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1017 19:08:36.737447   85117 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1017 19:08:36.737452   85117 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1017 19:08:36.737460   85117 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1017 19:08:36.737464   85117 command_runner.go:130] > # enable_criu_support = false
	I1017 19:08:36.737471   85117 command_runner.go:130] > # Enable/disable the generation of container and
	I1017 19:08:36.737477   85117 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1017 19:08:36.737484   85117 command_runner.go:130] > # enable_pod_events = false
	I1017 19:08:36.737490   85117 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1017 19:08:36.737507   85117 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1017 19:08:36.737510   85117 command_runner.go:130] > # default_runtime = "runc"
	I1017 19:08:36.737518   85117 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1017 19:08:36.737525   85117 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I1017 19:08:36.737537   85117 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1017 19:08:36.737545   85117 command_runner.go:130] > # creation as a file is not desired either.
	I1017 19:08:36.737567   85117 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1017 19:08:36.737578   85117 command_runner.go:130] > # the hostname is being managed dynamically.
	I1017 19:08:36.737585   85117 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1017 19:08:36.737590   85117 command_runner.go:130] > # ]
	I1017 19:08:36.737597   85117 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1017 19:08:36.737605   85117 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1017 19:08:36.737613   85117 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1017 19:08:36.737618   85117 command_runner.go:130] > # Each entry in the table should follow the format:
	I1017 19:08:36.737623   85117 command_runner.go:130] > #
	I1017 19:08:36.737628   85117 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1017 19:08:36.737635   85117 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1017 19:08:36.737639   85117 command_runner.go:130] > # runtime_type = "oci"
	I1017 19:08:36.737698   85117 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1017 19:08:36.737709   85117 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1017 19:08:36.737719   85117 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1017 19:08:36.737725   85117 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1017 19:08:36.737735   85117 command_runner.go:130] > # monitor_env = []
	I1017 19:08:36.737744   85117 command_runner.go:130] > # privileged_without_host_devices = false
	I1017 19:08:36.737748   85117 command_runner.go:130] > # allowed_annotations = []
	I1017 19:08:36.737754   85117 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1017 19:08:36.737763   85117 command_runner.go:130] > # Where:
	I1017 19:08:36.737771   85117 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1017 19:08:36.737778   85117 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1017 19:08:36.737786   85117 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1017 19:08:36.737794   85117 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1017 19:08:36.737798   85117 command_runner.go:130] > #   in $PATH.
	I1017 19:08:36.737803   85117 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1017 19:08:36.737810   85117 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1017 19:08:36.737816   85117 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1017 19:08:36.737821   85117 command_runner.go:130] > #   state.
	I1017 19:08:36.737828   85117 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1017 19:08:36.737836   85117 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1017 19:08:36.737842   85117 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1017 19:08:36.737849   85117 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1017 19:08:36.737856   85117 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1017 19:08:36.737865   85117 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1017 19:08:36.737872   85117 command_runner.go:130] > #   The currently recognized values are:
	I1017 19:08:36.737878   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1017 19:08:36.737892   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1017 19:08:36.737900   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1017 19:08:36.737906   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1017 19:08:36.737916   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1017 19:08:36.737925   85117 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1017 19:08:36.737935   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1017 19:08:36.737943   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1017 19:08:36.737951   85117 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1017 19:08:36.737958   85117 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1017 19:08:36.737966   85117 command_runner.go:130] > #   deprecated option "conmon".
	I1017 19:08:36.737973   85117 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1017 19:08:36.737981   85117 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1017 19:08:36.737987   85117 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1017 19:08:36.737995   85117 command_runner.go:130] > #   should be moved to the container's cgroup
	I1017 19:08:36.738001   85117 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1017 19:08:36.738010   85117 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1017 19:08:36.738019   85117 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1017 19:08:36.738027   85117 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1017 19:08:36.738030   85117 command_runner.go:130] > #
	I1017 19:08:36.738038   85117 command_runner.go:130] > # Using the seccomp notifier feature:
	I1017 19:08:36.738041   85117 command_runner.go:130] > #
	I1017 19:08:36.738046   85117 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1017 19:08:36.738055   85117 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1017 19:08:36.738060   85117 command_runner.go:130] > #
	I1017 19:08:36.738067   85117 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1017 19:08:36.738075   85117 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1017 19:08:36.738080   85117 command_runner.go:130] > #
	I1017 19:08:36.738086   85117 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1017 19:08:36.738090   85117 command_runner.go:130] > # feature.
	I1017 19:08:36.738092   85117 command_runner.go:130] > #
	I1017 19:08:36.738100   85117 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1017 19:08:36.738108   85117 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1017 19:08:36.738114   85117 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1017 19:08:36.738123   85117 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1017 19:08:36.738132   85117 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1017 19:08:36.738137   85117 command_runner.go:130] > #
	I1017 19:08:36.738143   85117 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1017 19:08:36.738151   85117 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1017 19:08:36.738156   85117 command_runner.go:130] > #
	I1017 19:08:36.738162   85117 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1017 19:08:36.738169   85117 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1017 19:08:36.738172   85117 command_runner.go:130] > #
	I1017 19:08:36.738178   85117 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1017 19:08:36.738186   85117 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1017 19:08:36.738190   85117 command_runner.go:130] > # limitation.
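	Tying the notifier walkthrough back to the runtimes table, the annotation would be enabled by adding it to allowed_annotations inside a handler entry such as the runc table that follows; a sketch of just that key:

	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]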
	I1017 19:08:36.738198   85117 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1017 19:08:36.738202   85117 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1017 19:08:36.738212   85117 command_runner.go:130] > runtime_type = "oci"
	I1017 19:08:36.738218   85117 command_runner.go:130] > runtime_root = "/run/runc"
	I1017 19:08:36.738222   85117 command_runner.go:130] > runtime_config_path = ""
	I1017 19:08:36.738228   85117 command_runner.go:130] > monitor_path = "/usr/bin/conmon"
	I1017 19:08:36.738233   85117 command_runner.go:130] > monitor_cgroup = "pod"
	I1017 19:08:36.738239   85117 command_runner.go:130] > monitor_exec_cgroup = ""
	I1017 19:08:36.738242   85117 command_runner.go:130] > monitor_env = [
	I1017 19:08:36.738250   85117 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1017 19:08:36.738253   85117 command_runner.go:130] > ]
	I1017 19:08:36.738258   85117 command_runner.go:130] > privileged_without_host_devices = false
	I1017 19:08:36.738270   85117 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1017 19:08:36.738277   85117 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1017 19:08:36.738283   85117 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1017 19:08:36.738302   85117 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1017 19:08:36.738315   85117 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1017 19:08:36.738320   85117 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1017 19:08:36.738331   85117 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1017 19:08:36.738339   85117 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1017 19:08:36.738347   85117 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1017 19:08:36.738354   85117 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1017 19:08:36.738359   85117 command_runner.go:130] > # Example:
	I1017 19:08:36.738364   85117 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1017 19:08:36.738368   85117 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1017 19:08:36.738373   85117 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1017 19:08:36.738378   85117 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1017 19:08:36.738381   85117 command_runner.go:130] > # cpuset = "0-1"
	I1017 19:08:36.738384   85117 command_runner.go:130] > # cpushares = 0
	I1017 19:08:36.738388   85117 command_runner.go:130] > # Where:
	I1017 19:08:36.738392   85117 command_runner.go:130] > # The workload name is workload-type.
	I1017 19:08:36.738399   85117 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1017 19:08:36.738406   85117 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1017 19:08:36.738411   85117 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1017 19:08:36.738419   85117 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1017 19:08:36.738427   85117 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1017 19:08:36.738431   85117 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1017 19:08:36.738437   85117 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1017 19:08:36.738443   85117 command_runner.go:130] > # Default value is set to true
	I1017 19:08:36.738447   85117 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1017 19:08:36.738454   85117 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1017 19:08:36.738459   85117 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1017 19:08:36.738465   85117 command_runner.go:130] > # Default value is set to 'false'
	I1017 19:08:36.738470   85117 command_runner.go:130] > # disable_hostport_mapping = false
	I1017 19:08:36.738478   85117 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1017 19:08:36.738484   85117 command_runner.go:130] > #
	I1017 19:08:36.738489   85117 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1017 19:08:36.738500   85117 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1017 19:08:36.738508   85117 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1017 19:08:36.738517   85117 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1017 19:08:36.738522   85117 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1017 19:08:36.738529   85117 command_runner.go:130] > [crio.image]
	I1017 19:08:36.738535   85117 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1017 19:08:36.738541   85117 command_runner.go:130] > # default_transport = "docker://"
	I1017 19:08:36.738547   85117 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1017 19:08:36.738573   85117 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1017 19:08:36.738580   85117 command_runner.go:130] > # global_auth_file = ""
	I1017 19:08:36.738589   85117 command_runner.go:130] > # The image used to instantiate infra containers.
	I1017 19:08:36.738594   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.738601   85117 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10.1"
	I1017 19:08:36.738608   85117 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1017 19:08:36.738616   85117 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1017 19:08:36.738622   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.738626   85117 command_runner.go:130] > # pause_image_auth_file = ""
	I1017 19:08:36.738634   85117 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1017 19:08:36.738642   85117 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1017 19:08:36.738648   85117 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1017 19:08:36.738656   85117 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1017 19:08:36.738660   85117 command_runner.go:130] > # pause_command = "/pause"
	I1017 19:08:36.738668   85117 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1017 19:08:36.738674   85117 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1017 19:08:36.738690   85117 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1017 19:08:36.738700   85117 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1017 19:08:36.738709   85117 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1017 19:08:36.738718   85117 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1017 19:08:36.738722   85117 command_runner.go:130] > # pinned_images = [
	I1017 19:08:36.738727   85117 command_runner.go:130] > # ]
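	Following the exact/glob/keyword pattern rules above, an illustrative pin list; the pause image matches this config, while the glob entry is an assumption:

	pinned_images = [
		"registry.k8s.io/pause:3.10.1",
		"quay.io/myorg/*",
	]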
	I1017 19:08:36.738734   85117 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1017 19:08:36.738742   85117 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1017 19:08:36.738748   85117 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1017 19:08:36.738756   85117 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1017 19:08:36.738762   85117 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1017 19:08:36.738768   85117 command_runner.go:130] > # signature_policy = ""
	I1017 19:08:36.738773   85117 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1017 19:08:36.738781   85117 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1017 19:08:36.738787   85117 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1017 19:08:36.738792   85117 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1017 19:08:36.738798   85117 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1017 19:08:36.738802   85117 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1017 19:08:36.738808   85117 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1017 19:08:36.738813   85117 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1017 19:08:36.738817   85117 command_runner.go:130] > # changing them here.
	I1017 19:08:36.738820   85117 command_runner.go:130] > # insecure_registries = [
	I1017 19:08:36.738823   85117 command_runner.go:130] > # ]
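	An illustrative uncommented entry, e.g. for a plain-HTTP registry on a lab network (the address is an assumption):

	insecure_registries = [
		"registry.local:5000",
	]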
	I1017 19:08:36.738828   85117 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1017 19:08:36.738833   85117 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1017 19:08:36.738836   85117 command_runner.go:130] > # image_volumes = "mkdir"
	I1017 19:08:36.738841   85117 command_runner.go:130] > # Temporary directory to use for storing big files
	I1017 19:08:36.738845   85117 command_runner.go:130] > # big_files_temporary_dir = ""
	I1017 19:08:36.738850   85117 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1017 19:08:36.738853   85117 command_runner.go:130] > # CNI plugins.
	I1017 19:08:36.738856   85117 command_runner.go:130] > [crio.network]
	I1017 19:08:36.738861   85117 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1017 19:08:36.738869   85117 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1017 19:08:36.738873   85117 command_runner.go:130] > # cni_default_network = ""
	I1017 19:08:36.738880   85117 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1017 19:08:36.738884   85117 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1017 19:08:36.738892   85117 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1017 19:08:36.738895   85117 command_runner.go:130] > # plugin_dirs = [
	I1017 19:08:36.738901   85117 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1017 19:08:36.738904   85117 command_runner.go:130] > # ]
	I1017 19:08:36.738909   85117 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1017 19:08:36.738915   85117 command_runner.go:130] > [crio.metrics]
	I1017 19:08:36.738919   85117 command_runner.go:130] > # Globally enable or disable metrics support.
	I1017 19:08:36.738925   85117 command_runner.go:130] > enable_metrics = true
	I1017 19:08:36.738929   85117 command_runner.go:130] > # Specify enabled metrics collectors.
	I1017 19:08:36.738939   85117 command_runner.go:130] > # By default, all metrics are enabled.
	I1017 19:08:36.738948   85117 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1017 19:08:36.738957   85117 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1017 19:08:36.738966   85117 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1017 19:08:36.738969   85117 command_runner.go:130] > # metrics_collectors = [
	I1017 19:08:36.738975   85117 command_runner.go:130] > # 	"operations",
	I1017 19:08:36.738980   85117 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1017 19:08:36.738988   85117 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1017 19:08:36.738992   85117 command_runner.go:130] > # 	"operations_errors",
	I1017 19:08:36.738998   85117 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1017 19:08:36.739002   85117 command_runner.go:130] > # 	"image_pulls_by_name",
	I1017 19:08:36.739008   85117 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1017 19:08:36.739012   85117 command_runner.go:130] > # 	"image_pulls_failures",
	I1017 19:08:36.739019   85117 command_runner.go:130] > # 	"image_pulls_successes",
	I1017 19:08:36.739022   85117 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1017 19:08:36.739029   85117 command_runner.go:130] > # 	"image_layer_reuse",
	I1017 19:08:36.739033   85117 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1017 19:08:36.739037   85117 command_runner.go:130] > # 	"containers_oom_total",
	I1017 19:08:36.739041   85117 command_runner.go:130] > # 	"containers_oom",
	I1017 19:08:36.739047   85117 command_runner.go:130] > # 	"processes_defunct",
	I1017 19:08:36.739050   85117 command_runner.go:130] > # 	"operations_total",
	I1017 19:08:36.739057   85117 command_runner.go:130] > # 	"operations_latency_seconds",
	I1017 19:08:36.739061   85117 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1017 19:08:36.739068   85117 command_runner.go:130] > # 	"operations_errors_total",
	I1017 19:08:36.739071   85117 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1017 19:08:36.739078   85117 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1017 19:08:36.739082   85117 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1017 19:08:36.739088   85117 command_runner.go:130] > # 	"image_pulls_success_total",
	I1017 19:08:36.739092   85117 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1017 19:08:36.739099   85117 command_runner.go:130] > # 	"containers_oom_count_total",
	I1017 19:08:36.739103   85117 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1017 19:08:36.739110   85117 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1017 19:08:36.739112   85117 command_runner.go:130] > # ]
	I1017 19:08:36.739119   85117 command_runner.go:130] > # The port on which the metrics server will listen.
	I1017 19:08:36.739125   85117 command_runner.go:130] > # metrics_port = 9090
	I1017 19:08:36.739132   85117 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1017 19:08:36.739136   85117 command_runner.go:130] > # metrics_socket = ""
	I1017 19:08:36.739143   85117 command_runner.go:130] > # The certificate for the secure metrics server.
	I1017 19:08:36.739148   85117 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1017 19:08:36.739156   85117 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1017 19:08:36.739161   85117 command_runner.go:130] > # certificate on any modification event.
	I1017 19:08:36.739165   85117 command_runner.go:130] > # metrics_cert = ""
	I1017 19:08:36.739170   85117 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1017 19:08:36.739176   85117 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1017 19:08:36.739180   85117 command_runner.go:130] > # metrics_key = ""
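	Pulling the metrics knobs together, a sketch of a TLS-secured metrics endpoint; the certificate paths are assumptions:

	enable_metrics = true
	metrics_port = 9090
	metrics_cert = "/etc/crio/metrics.crt"
	metrics_key = "/etc/crio/metrics.key"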
	I1017 19:08:36.739188   85117 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1017 19:08:36.739191   85117 command_runner.go:130] > [crio.tracing]
	I1017 19:08:36.739200   85117 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1017 19:08:36.739203   85117 command_runner.go:130] > # enable_tracing = false
	I1017 19:08:36.739214   85117 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1017 19:08:36.739221   85117 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1017 19:08:36.739227   85117 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1017 19:08:36.739240   85117 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
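	A sketch that enables tracing and always samples, per the per-million note above; the endpoint is the documented default:

	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	tracing_sampling_rate_per_million = 1000000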
	I1017 19:08:36.739246   85117 command_runner.go:130] > # CRI-O NRI configuration.
	I1017 19:08:36.739250   85117 command_runner.go:130] > [crio.nri]
	I1017 19:08:36.739254   85117 command_runner.go:130] > # Globally enable or disable NRI.
	I1017 19:08:36.739260   85117 command_runner.go:130] > # enable_nri = false
	I1017 19:08:36.739264   85117 command_runner.go:130] > # NRI socket to listen on.
	I1017 19:08:36.739271   85117 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1017 19:08:36.739275   85117 command_runner.go:130] > # NRI plugin directory to use.
	I1017 19:08:36.739280   85117 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1017 19:08:36.739287   85117 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1017 19:08:36.739291   85117 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1017 19:08:36.739299   85117 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1017 19:08:36.739303   85117 command_runner.go:130] > # nri_disable_connections = false
	I1017 19:08:36.739310   85117 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1017 19:08:36.739315   85117 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1017 19:08:36.739325   85117 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1017 19:08:36.739332   85117 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
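	A minimal sketch turning NRI on with the documented default socket and plugin directory:

	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"
	nri_plugin_dir = "/opt/nri/plugins"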
	I1017 19:08:36.739337   85117 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1017 19:08:36.739343   85117 command_runner.go:130] > [crio.stats]
	I1017 19:08:36.739348   85117 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1017 19:08:36.739353   85117 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1017 19:08:36.739360   85117 command_runner.go:130] > # stats_collection_period = 0
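	For interval-based rather than on-demand collection, an illustrative period in seconds:

	stats_collection_period = 10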
	I1017 19:08:36.739439   85117 cni.go:84] Creating CNI manager for ""
	I1017 19:08:36.739451   85117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 19:08:36.739480   85117 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:08:36.739504   85117 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-016863 NodeName:functional-016863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:08:36.739644   85117 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-016863"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.205"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
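	One way to sanity-check a generated config like the one above, before it is copied to /var/tmp/minikube/kubeadm.yaml.new further down, is kubeadm's own validator; this is a sketch assuming kubeadm v1.26 or newer on the node (true for the v1.34.1 binaries found below):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new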
	
	I1017 19:08:36.739707   85117 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:08:36.752377   85117 command_runner.go:130] > kubeadm
	I1017 19:08:36.752404   85117 command_runner.go:130] > kubectl
	I1017 19:08:36.752408   85117 command_runner.go:130] > kubelet
	I1017 19:08:36.752864   85117 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:08:36.752933   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:08:36.764722   85117 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1017 19:08:36.786673   85117 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:08:36.808021   85117 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1017 19:08:36.828821   85117 ssh_runner.go:195] Run: grep 192.168.39.205	control-plane.minikube.internal$ /etc/hosts
	I1017 19:08:36.833177   85117 command_runner.go:130] > 192.168.39.205	control-plane.minikube.internal
	I1017 19:08:36.833246   85117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:08:37.010934   85117 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:08:37.030439   85117 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863 for IP: 192.168.39.205
	I1017 19:08:37.030467   85117 certs.go:195] generating shared ca certs ...
	I1017 19:08:37.030485   85117 certs.go:227] acquiring lock for ca certs: {Name:mka410ab7d3b92eaaa0d0545223807c0ba196baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:08:37.030690   85117 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key
	I1017 19:08:37.030747   85117 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key
	I1017 19:08:37.030762   85117 certs.go:257] generating profile certs ...
	I1017 19:08:37.030878   85117 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/client.key
	I1017 19:08:37.030972   85117 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.key.c24585d5
	I1017 19:08:37.031049   85117 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.key
	I1017 19:08:37.031067   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:08:37.031086   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:08:37.031102   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:08:37.031121   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:08:37.031138   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:08:37.031155   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:08:37.031179   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:08:37.031195   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:08:37.031270   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem (1338 bytes)
	W1017 19:08:37.031314   85117 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439_empty.pem, impossibly tiny 0 bytes
	I1017 19:08:37.031328   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 19:08:37.031364   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:08:37.031395   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:08:37.031426   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem (1679 bytes)
	I1017 19:08:37.031478   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem (1708 bytes)
	I1017 19:08:37.031518   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.031537   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.031564   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem -> /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.032341   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:08:37.064212   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:08:37.094935   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:08:37.126973   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:08:37.157540   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 19:08:37.187168   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 19:08:37.217543   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:08:37.247400   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 19:08:37.278758   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem --> /usr/share/ca-certificates/794392.pem (1708 bytes)
	I1017 19:08:37.308088   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:08:37.338377   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem --> /usr/share/ca-certificates/79439.pem (1338 bytes)
	I1017 19:08:37.369350   85117 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:08:37.390154   85117 ssh_runner.go:195] Run: openssl version
	I1017 19:08:37.397183   85117 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1017 19:08:37.397310   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/79439.pem && ln -fs /usr/share/ca-certificates/79439.pem /etc/ssl/certs/79439.pem"
	I1017 19:08:37.411628   85117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.417085   85117 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 17 19:05 /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.417178   85117 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:05 /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.417250   85117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.424962   85117 command_runner.go:130] > 51391683
	I1017 19:08:37.425158   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/79439.pem /etc/ssl/certs/51391683.0"
	I1017 19:08:37.437578   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/794392.pem && ln -fs /usr/share/ca-certificates/794392.pem /etc/ssl/certs/794392.pem"
	I1017 19:08:37.452363   85117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.458096   85117 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 17 19:05 /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.458164   85117 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:05 /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.458223   85117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.466074   85117 command_runner.go:130] > 3ec20f2e
	I1017 19:08:37.466249   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/794392.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:08:37.478828   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:08:37.493772   85117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.499621   85117 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.499822   85117 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.499886   85117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.507945   85117 command_runner.go:130] > b5213941
	I1017 19:08:37.508223   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
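
The three `ln -fs` commands above install each CA under OpenSSL's hashed-directory convention: `openssl x509 -hash -noout` prints the subject-name hash (51391683, 3ec20f2e, b5213941 here), and a symlink named `<hash>.0` in /etc/ssl/certs is what TLS stacks traverse when verifying against a CA directory. A minimal sketch of the same step in Go, shelling out to the same openssl binary (paths taken from the log; not minikube's actual implementation):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert links certPath into certsDir under OpenSSL's
    // <subject-hash>.0 naming so hashed directory lookups can find it.
    func installCACert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "51391683" for 79439.pem
        link := filepath.Join(certsDir, hash+".0")
        os.Remove(link) // mirror ln -fs: drop any stale link first
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/79439.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
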
	I1017 19:08:37.520563   85117 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:08:37.526401   85117 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:08:37.526439   85117 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1017 19:08:37.526449   85117 command_runner.go:130] > Device: 253,1	Inode: 1054372     Links: 1
	I1017 19:08:37.526460   85117 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1017 19:08:37.526477   85117 command_runner.go:130] > Access: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526489   85117 command_runner.go:130] > Modify: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526500   85117 command_runner.go:130] > Change: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526510   85117 command_runner.go:130] >  Birth: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526610   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:08:37.533974   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.534188   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:08:37.541725   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.541833   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:08:37.549277   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.549348   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:08:37.556865   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.556943   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:08:37.564379   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.564452   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 19:08:37.571575   85117 command_runner.go:130] > Certificate will not expire
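
The `-checkend 86400` probes above ask whether each control-plane certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The same predicate can be evaluated natively with Go's crypto/x509, sketched below (file path copied from the log; an illustration, not minikube's code):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // before now+d — the same check as `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        if soon {
            fmt.Println("Certificate will expire")
            os.Exit(1)
        }
        fmt.Println("Certificate will not expire")
    }
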
	I1017 19:08:37.571807   85117 kubeadm.go:400] StartCluster: {Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:08:37.571943   85117 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:08:37.572009   85117 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:08:37.614275   85117 command_runner.go:130] > 5052ee3b4b13e54f7516a211d580d31d7e4856f34ebe5b5bc8a1778244018fb0
	I1017 19:08:37.614306   85117 command_runner.go:130] > 56c02355399031d32d66f1780ee1bc7396eeb5eb1b454f946254fe345879e8e0
	I1017 19:08:37.614315   85117 command_runner.go:130] > 56048147246b1d30ce16d066a4bbb216f1f7c9b1459e21fa60ee108fdd3aa42a
	I1017 19:08:37.614325   85117 command_runner.go:130] > 1b1f7dfe245a6d20e55f02381f27ec11e1eec3bf32b8112aaab88ea95c008e93
	I1017 19:08:37.614332   85117 command_runner.go:130] > b4db2cb7b47399fb64d0f31922d185a1ae009961ae05b56d9514db6f489a25eb
	I1017 19:08:37.614340   85117 command_runner.go:130] > d6eeaf9720fb0a5853cabba8afe0f0c64370fd422e21db9af2a9b6ce4b9aecc1
	I1017 19:08:37.614347   85117 command_runner.go:130] > 26c7a235bb67e91ab6abf0c0282c65a526c3d2fc628ec6008956402a02d5b1e8
	I1017 19:08:37.614369   85117 command_runner.go:130] > 171e623260fdb36d39493caf1c0b8c10efb097287233e2565304b12ece716a85
	I1017 19:08:37.614383   85117 command_runner.go:130] > 0fe4cc88e7a7a757f4debf7f3ff8f76bef81d0a36e83bd994df86baa42f47a71
	I1017 19:08:37.614397   85117 command_runner.go:130] > 86dba9687f70280ffaa952d354e90ec1a4ff74d73869d9360c56690901ad9461
	I1017 19:08:37.614406   85117 command_runner.go:130] > 4d4ae675fa012cf6e18dd10516f8c83d32b364f3e27d8068722234a797bc7b1a
	I1017 19:08:37.614460   85117 cri.go:89] found id: "5052ee3b4b13e54f7516a211d580d31d7e4856f34ebe5b5bc8a1778244018fb0"
	I1017 19:08:37.614475   85117 cri.go:89] found id: "56c02355399031d32d66f1780ee1bc7396eeb5eb1b454f946254fe345879e8e0"
	I1017 19:08:37.614481   85117 cri.go:89] found id: "56048147246b1d30ce16d066a4bbb216f1f7c9b1459e21fa60ee108fdd3aa42a"
	I1017 19:08:37.614486   85117 cri.go:89] found id: "1b1f7dfe245a6d20e55f02381f27ec11e1eec3bf32b8112aaab88ea95c008e93"
	I1017 19:08:37.614490   85117 cri.go:89] found id: "b4db2cb7b47399fb64d0f31922d185a1ae009961ae05b56d9514db6f489a25eb"
	I1017 19:08:37.614498   85117 cri.go:89] found id: "d6eeaf9720fb0a5853cabba8afe0f0c64370fd422e21db9af2a9b6ce4b9aecc1"
	I1017 19:08:37.614513   85117 cri.go:89] found id: "26c7a235bb67e91ab6abf0c0282c65a526c3d2fc628ec6008956402a02d5b1e8"
	I1017 19:08:37.614519   85117 cri.go:89] found id: "171e623260fdb36d39493caf1c0b8c10efb097287233e2565304b12ece716a85"
	I1017 19:08:37.614521   85117 cri.go:89] found id: "0fe4cc88e7a7a757f4debf7f3ff8f76bef81d0a36e83bd994df86baa42f47a71"
	I1017 19:08:37.614530   85117 cri.go:89] found id: "86dba9687f70280ffaa952d354e90ec1a4ff74d73869d9360c56690901ad9461"
	I1017 19:08:37.614535   85117 cri.go:89] found id: "4d4ae675fa012cf6e18dd10516f8c83d32b364f3e27d8068722234a797bc7b1a"
	I1017 19:08:37.614538   85117 cri.go:89] found id: ""
	I1017 19:08:37.614600   85117 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
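
The container inventory at the end of the truncated log comes from `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, which emits one container ID per line; minikube splits that output into the `cri.go:89] found id:` entries. A small sketch of the same listing, assuming crictl is on PATH and the caller may reach the CRI socket:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubeSystemContainers returns the IDs of all containers (any state)
    // whose pod lives in the kube-system namespace, as reported by crictl.
    func kubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := kubeSystemContainers()
        if err != nil {
            panic(err)
        }
        for _, id := range ids {
            fmt.Println("found id:", id)
        }
    }
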
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016863 -n functional-016863
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016863 -n functional-016863: exit status 2 (246.924629ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-016863" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (394.04s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (393.4s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-016863 kubectl -- --context functional-016863 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016863 kubectl -- --context functional-016863 get pods: exit status 1 (98.632593ms)

                                                
                                                
** stderr ** 
	E1017 19:34:12.147246   91755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.205:8441/api?timeout=32s\": dial tcp 192.168.39.205:8441: connect: connection refused"
	E1017 19:34:12.147767   91755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.205:8441/api?timeout=32s\": dial tcp 192.168.39.205:8441: connect: connection refused"
	E1017 19:34:12.150088   91755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.205:8441/api?timeout=32s\": dial tcp 192.168.39.205:8441: connect: connection refused"
	E1017 19:34:12.150486   91755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.205:8441/api?timeout=32s\": dial tcp 192.168.39.205:8441: connect: connection refused"
	E1017 19:34:12.151995   91755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.205:8441/api?timeout=32s\": dial tcp 192.168.39.205:8441: connect: connection refused"
	The connection to the server 192.168.39.205:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-amd64 -p functional-016863 kubectl -- --context functional-016863 get pods": exit status 1
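
Every retry in the stderr block fails with `connect: connection refused` against 192.168.39.205:8441, i.e. nothing is listening on the apiserver port — consistent with the `Stopped` apiserver status in the preceding post-mortem. That condition can be reproduced with a plain TCP dial (address taken from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // A refused dial here is exactly what kubectl's discovery client reports.
        conn, err := net.DialTimeout("tcp", "192.168.39.205:8441", 3*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
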
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-016863 -n functional-016863
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-016863 -n functional-016863: exit status 2 (227.295483ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-016863 logs -n 25
E1017 19:36:50.817609   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:38:47.750829   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-016863 logs -n 25: (6m32.777032869s)
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-712449 --log_dir /tmp/nospam-712449 pause                                                                                        │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ unpause │ nospam-712449 --log_dir /tmp/nospam-712449 unpause                                                                                      │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ unpause │ nospam-712449 --log_dir /tmp/nospam-712449 unpause                                                                                      │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ unpause │ nospam-712449 --log_dir /tmp/nospam-712449 unpause                                                                                      │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ stop    │ nospam-712449 --log_dir /tmp/nospam-712449 stop                                                                                         │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:05 UTC │
	│ stop    │ nospam-712449 --log_dir /tmp/nospam-712449 stop                                                                                         │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ stop    │ nospam-712449 --log_dir /tmp/nospam-712449 stop                                                                                         │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ delete  │ -p nospam-712449                                                                                                                        │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ start   │ -p functional-016863 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:06 UTC │
	│ start   │ -p functional-016863 --alsologtostderr -v=8                                                                                             │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │                     │
	│ cache   │ functional-016863 cache add registry.k8s.io/pause:3.1                                                                                   │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ functional-016863 cache add registry.k8s.io/pause:3.3                                                                                   │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ functional-016863 cache add registry.k8s.io/pause:latest                                                                                │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ functional-016863 cache add minikube-local-cache-test:functional-016863                                                                 │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ functional-016863 cache delete minikube-local-cache-test:functional-016863                                                              │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ list                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ ssh     │ functional-016863 ssh sudo crictl images                                                                                                │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ ssh     │ functional-016863 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                      │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ ssh     │ functional-016863 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                 │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │                     │
	│ cache   │ functional-016863 cache reload                                                                                                          │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ ssh     │ functional-016863 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                 │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ kubectl │ functional-016863 kubectl -- --context functional-016863 get pods                                                                       │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:06:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:06:56.570682   85117 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:06:56.570809   85117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:56.570820   85117 out.go:374] Setting ErrFile to fd 2...
	I1017 19:06:56.570826   85117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:56.571105   85117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-75534/.minikube/bin
	I1017 19:06:56.571578   85117 out.go:368] Setting JSON to false
	I1017 19:06:56.572426   85117 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6568,"bootTime":1760721449,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:06:56.572524   85117 start.go:141] virtualization: kvm guest
	I1017 19:06:56.574519   85117 out.go:179] * [functional-016863] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:06:56.575690   85117 notify.go:220] Checking for updates...
	I1017 19:06:56.575704   85117 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:06:56.577138   85117 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:06:56.578363   85117 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig
	I1017 19:06:56.579669   85117 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube
	I1017 19:06:56.581027   85117 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:06:56.582307   85117 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:06:56.583921   85117 config.go:182] Loaded profile config "functional-016863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:06:56.584037   85117 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:06:56.584492   85117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:06:56.584589   85117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:06:56.600478   85117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35877
	I1017 19:06:56.600991   85117 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:06:56.601750   85117 main.go:141] libmachine: Using API Version  1
	I1017 19:06:56.601786   85117 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:06:56.602161   85117 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:06:56.602390   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
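
The `Launching plugin server ... Plugin server listening at address 127.0.0.1:35877` exchange reflects libmachine's driver model: the kvm2 driver runs as a separate plugin binary, and the main process calls it over RPC on a loopback port. A hypothetical illustration of that pattern with Go's net/rpc (the service and method names below are placeholders, not libmachine's real ones):

    package main

    import (
        "fmt"
        "net/rpc"
    )

    func main() {
        // Dial the plugin's loopback port (35877 is the one printed above).
        client, err := rpc.Dial("tcp", "127.0.0.1:35877")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        var version int
        // "Driver.GetVersion" is illustrative only; libmachine's actual RPC
        // service and method names live in its plugin package.
        if err := client.Call("Driver.GetVersion", 0, &version); err != nil {
            panic(err)
        }
        fmt.Println("plugin API version:", version)
    }
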
	I1017 19:06:56.635697   85117 out.go:179] * Using the kvm2 driver based on existing profile
	I1017 19:06:56.637016   85117 start.go:305] selected driver: kvm2
	I1017 19:06:56.637040   85117 start.go:925] validating driver "kvm2" against &{Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:06:56.637141   85117 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:06:56.637622   85117 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:06:56.637712   85117 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:06:56.651574   85117 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:06:56.651619   85117 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:06:56.665844   85117 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:06:56.666547   85117 cni.go:84] Creating CNI manager for ""
	I1017 19:06:56.666631   85117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 19:06:56.666699   85117 start.go:349] cluster config:
	{Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:06:56.666812   85117 iso.go:125] acquiring lock: {Name:mk89d24a0bd9a0a8cf0564a4affa55e11eaff101 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:06:56.668638   85117 out.go:179] * Starting "functional-016863" primary control-plane node in "functional-016863" cluster
	I1017 19:06:56.669893   85117 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:06:56.669940   85117 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:06:56.669951   85117 cache.go:58] Caching tarball of preloaded images
	I1017 19:06:56.670102   85117 preload.go:233] Found /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:06:56.670116   85117 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:06:56.670235   85117 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/config.json ...
	I1017 19:06:56.670445   85117 start.go:360] acquireMachinesLock for functional-016863: {Name:mke0c3abe726945d0c60793aa0bf26eb33df7fed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1017 19:06:56.670494   85117 start.go:364] duration metric: took 29.325µs to acquireMachinesLock for "functional-016863"
	I1017 19:06:56.670514   85117 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:06:56.670524   85117 fix.go:54] fixHost starting: 
	I1017 19:06:56.670828   85117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:06:56.670877   85117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:06:56.683516   85117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I1017 19:06:56.683978   85117 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:06:56.684470   85117 main.go:141] libmachine: Using API Version  1
	I1017 19:06:56.684493   85117 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:06:56.684844   85117 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:06:56.685047   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:06:56.685223   85117 main.go:141] libmachine: (functional-016863) Calling .GetState
	I1017 19:06:56.686913   85117 fix.go:112] recreateIfNeeded on functional-016863: state=Running err=<nil>
	W1017 19:06:56.686945   85117 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:06:56.688754   85117 out.go:252] * Updating the running kvm2 "functional-016863" VM ...
	I1017 19:06:56.688779   85117 machine.go:93] provisionDockerMachine start ...
	I1017 19:06:56.688795   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:06:56.689021   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:56.691985   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.692501   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:56.692527   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.692713   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:56.692904   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.693142   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.693299   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:56.693474   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:56.693724   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:56.693736   85117 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:06:56.799511   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016863
	
	I1017 19:06:56.799542   85117 main.go:141] libmachine: (functional-016863) Calling .GetMachineName
	I1017 19:06:56.799819   85117 buildroot.go:166] provisioning hostname "functional-016863"
	I1017 19:06:56.799862   85117 main.go:141] libmachine: (functional-016863) Calling .GetMachineName
	I1017 19:06:56.800154   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:56.803810   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.804342   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:56.804375   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.804593   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:56.804779   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.804950   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.805112   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:56.805279   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:56.805490   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:56.805503   85117 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-016863 && echo "functional-016863" | sudo tee /etc/hostname
	I1017 19:06:56.929174   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016863
	
	I1017 19:06:56.929205   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:56.932429   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.932929   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:56.932954   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.933186   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:56.933423   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.933612   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.933826   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:56.934076   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:56.934309   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:56.934326   85117 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-016863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-016863/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-016863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:06:57.042297   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
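
The repeated `Using SSH client type: native` lines mean provisioning runs over Go's SSH stack rather than an external ssh binary: dial 192.168.39.205:22 as the docker user, open a session, run one command, collect its output. A minimal equivalent with golang.org/x/crypto/ssh (key path copied from the sshutil.go line further below; host-key verification is skipped here purely for brevity):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.205:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("SSH cmd output: %s", out)
    }
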
	I1017 19:06:57.042330   85117 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21753-75534/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-75534/.minikube}
	I1017 19:06:57.042373   85117 buildroot.go:174] setting up certificates
	I1017 19:06:57.042382   85117 provision.go:84] configureAuth start
	I1017 19:06:57.042395   85117 main.go:141] libmachine: (functional-016863) Calling .GetMachineName
	I1017 19:06:57.042715   85117 main.go:141] libmachine: (functional-016863) Calling .GetIP
	I1017 19:06:57.045902   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.046469   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.046508   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.046778   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:57.049360   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.049857   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.049902   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.050076   85117 provision.go:143] copyHostCerts
	I1017 19:06:57.050123   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem
	I1017 19:06:57.050183   85117 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem, removing ...
	I1017 19:06:57.050205   85117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem
	I1017 19:06:57.050294   85117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem (1082 bytes)
	I1017 19:06:57.050425   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem
	I1017 19:06:57.050463   85117 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem, removing ...
	I1017 19:06:57.050473   85117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem
	I1017 19:06:57.050602   85117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem (1123 bytes)
	I1017 19:06:57.050772   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem
	I1017 19:06:57.050815   85117 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem, removing ...
	I1017 19:06:57.050825   85117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem
	I1017 19:06:57.050881   85117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem (1679 bytes)
	I1017 19:06:57.051013   85117 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem org=jenkins.functional-016863 san=[127.0.0.1 192.168.39.205 functional-016863 localhost minikube]
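
The `generating server cert ... san=[127.0.0.1 192.168.39.205 functional-016863 localhost minikube]` step issues a CA-signed serving certificate whose SAN list covers every name and address the VM answers to. The signing itself is standard crypto/x509; a compact, self-contained sketch follows (it creates a throwaway CA in-process, whereas minikube reuses ca.pem/ca-key.pem from its store; error handling abbreviated):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; minikube would load its existing ca.pem/ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SAN set from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-016863"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"functional-016863", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.205")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
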
	I1017 19:06:57.269277   85117 provision.go:177] copyRemoteCerts
	I1017 19:06:57.269362   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:06:57.269401   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:57.272458   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.272834   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.272866   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.273060   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:57.273266   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:57.273480   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:57.273640   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:06:57.362432   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:06:57.362511   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:06:57.412884   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:06:57.413107   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 19:06:57.450092   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:06:57.450212   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:06:57.486026   85117 provision.go:87] duration metric: took 443.605637ms to configureAuth
	I1017 19:06:57.486057   85117 buildroot.go:189] setting minikube options for container-runtime
	I1017 19:06:57.486228   85117 config.go:182] Loaded profile config "functional-016863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:06:57.486309   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:57.489476   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.489895   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.489928   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.490160   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:57.490354   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:57.490544   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:57.490703   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:57.490888   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:57.491101   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:57.491114   85117 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:07:03.084984   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:07:03.085021   85117 machine.go:96] duration metric: took 6.396234121s to provisionDockerMachine
	I1017 19:07:03.085042   85117 start.go:293] postStartSetup for "functional-016863" (driver="kvm2")
	I1017 19:07:03.085056   85117 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:07:03.085084   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.085514   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:07:03.085593   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.089211   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.089621   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.089655   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.089838   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.090055   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.090184   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.090354   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:07:03.173813   85117 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:07:03.179411   85117 command_runner.go:130] > NAME=Buildroot
	I1017 19:07:03.179437   85117 command_runner.go:130] > VERSION=2025.02-dirty
	I1017 19:07:03.179441   85117 command_runner.go:130] > ID=buildroot
	I1017 19:07:03.179446   85117 command_runner.go:130] > VERSION_ID=2025.02
	I1017 19:07:03.179452   85117 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1017 19:07:03.179493   85117 info.go:137] Remote host: Buildroot 2025.02
	I1017 19:07:03.179508   85117 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-75534/.minikube/addons for local assets ...
	I1017 19:07:03.179595   85117 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-75534/.minikube/files for local assets ...
	I1017 19:07:03.179714   85117 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> 794392.pem in /etc/ssl/certs
	I1017 19:07:03.179729   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> /etc/ssl/certs/794392.pem
	I1017 19:07:03.179835   85117 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts -> hosts in /etc/test/nested/copy/79439
	I1017 19:07:03.179847   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts -> /etc/test/nested/copy/79439/hosts
	I1017 19:07:03.179893   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/79439
	I1017 19:07:03.192128   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem --> /etc/ssl/certs/794392.pem (1708 bytes)
	I1017 19:07:03.223838   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts --> /etc/test/nested/copy/79439/hosts (40 bytes)
	I1017 19:07:03.313679   85117 start.go:296] duration metric: took 228.61978ms for postStartSetup
	I1017 19:07:03.313721   85117 fix.go:56] duration metric: took 6.643198174s for fixHost
	I1017 19:07:03.313742   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.317578   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.318077   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.318115   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.318367   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.318648   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.318838   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.319029   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.319295   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:07:03.319597   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:07:03.319613   85117 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1017 19:07:03.479608   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760728023.470011514
	
	I1017 19:07:03.479635   85117 fix.go:216] guest clock: 1760728023.470011514
	I1017 19:07:03.479642   85117 fix.go:229] Guest: 2025-10-17 19:07:03.470011514 +0000 UTC Remote: 2025-10-17 19:07:03.313724873 +0000 UTC m=+6.781586281 (delta=156.286641ms)
	I1017 19:07:03.479664   85117 fix.go:200] guest clock delta is within tolerance: 156.286641ms
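The delta above is the guest's date +%s.%N output compared against the host wall clock captured when the SSH command returned. A naive sketch of the same check from the host (key path, user, and IP taken from this log; the 1s tolerance is an assumption, and this version ignores SSH round-trip latency):

	host_now=$(date +%s.%N)
	guest_now=$(ssh -i /home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa docker@192.168.39.205 'date +%s.%N')
	awk -v g="$guest_now" -v h="$host_now" 'BEGIN { d = g - h; if (d < 0) d = -d; exit !(d < 1.0) }' \
	  && echo "guest clock delta within tolerance" || echo "guest clock needs adjustment"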
	I1017 19:07:03.479671   85117 start.go:83] releasing machines lock for "functional-016863", held for 6.809163445s
	I1017 19:07:03.479692   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.480016   85117 main.go:141] libmachine: (functional-016863) Calling .GetIP
	I1017 19:07:03.483255   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.483786   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.483830   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.484026   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.484650   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.484910   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.485041   85117 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:07:03.485087   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.485146   85117 ssh_runner.go:195] Run: cat /version.json
	I1017 19:07:03.485170   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.488247   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.488613   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.488732   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.488760   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.488948   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.489117   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.489150   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.489166   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.489373   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.489440   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.489584   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.489660   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:07:03.489750   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.489896   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:07:03.669674   85117 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1017 19:07:03.669755   85117 command_runner.go:130] > {"iso_version": "v1.37.0-1760609724-21757", "kicbase_version": "v0.0.48-1760363564-21724", "minikube_version": "v1.37.0", "commit": "fd6729aa481bc45098452b0ed0ffbe097c29d1bb"}
	I1017 19:07:03.669885   85117 ssh_runner.go:195] Run: systemctl --version
	I1017 19:07:03.691813   85117 command_runner.go:130] > systemd 256 (256.7)
	I1017 19:07:03.691879   85117 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1017 19:07:03.691965   85117 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:07:03.942910   85117 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1017 19:07:03.963385   85117 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1017 19:07:03.963654   85117 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:07:03.963723   85117 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:07:04.004504   85117 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
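Note that the find invocation above is logged in exec form: the parentheses and globs are argv elements passed directly to find, not shell syntax. A shell-escaped equivalent of the same disable step, as a sketch:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;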
	I1017 19:07:04.004543   85117 start.go:495] detecting cgroup driver to use...
	I1017 19:07:04.004649   85117 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:07:04.048623   85117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:07:04.093677   85117 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:07:04.093751   85117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:07:04.125946   85117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:07:04.177031   85117 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:07:04.556434   85117 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:07:04.871840   85117 docker.go:234] disabling docker service ...
	I1017 19:07:04.871920   85117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:07:04.914455   85117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:07:04.944209   85117 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:07:05.273173   85117 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:07:05.563772   85117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:07:05.602259   85117 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:07:05.639391   85117 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1017 19:07:05.639452   85117 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:07:05.639509   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.662293   85117 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:07:05.662360   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.681766   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.702415   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.723309   85117 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:07:05.743334   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.758794   85117 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.777348   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
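The sed pipeline above pins the pause image, switches the cgroup driver to cgroupfs, sets the conmon cgroup, and injects the unprivileged-port sysctl into the same drop-in. A quick verification sketch on the guest (the expected lines are inferred from the commands, not captured output):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",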
	I1017 19:07:05.792297   85117 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:07:05.810337   85117 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1017 19:07:05.810427   85117 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:07:05.829378   85117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:07:06.061473   85117 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:08:36.459335   85117 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.39776602s)
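The restart above took 1m30.4s to return, which dominates this segment of the start. When a crio restart stalls like this, the unit's journal on the guest over that window is the first thing to pull (a sketch; timestamps taken from this log):

	sudo journalctl -u crio --no-pager --since '2025-10-17 19:07:06' --until '2025-10-17 19:08:37'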
	I1017 19:08:36.459402   85117 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:08:36.459487   85117 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:08:36.466176   85117 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1017 19:08:36.466208   85117 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1017 19:08:36.466216   85117 command_runner.go:130] > Device: 0,23	Inode: 1978        Links: 1
	I1017 19:08:36.466222   85117 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1017 19:08:36.466229   85117 command_runner.go:130] > Access: 2025-10-17 19:08:36.354383352 +0000
	I1017 19:08:36.466239   85117 command_runner.go:130] > Modify: 2025-10-17 19:08:36.274379788 +0000
	I1017 19:08:36.466245   85117 command_runner.go:130] > Change: 2025-10-17 19:08:36.274379788 +0000
	I1017 19:08:36.466267   85117 command_runner.go:130] >  Birth: 2025-10-17 19:08:36.274379788 +0000
	I1017 19:08:36.466319   85117 start.go:563] Will wait 60s for crictl version
	I1017 19:08:36.466390   85117 ssh_runner.go:195] Run: which crictl
	I1017 19:08:36.470951   85117 command_runner.go:130] > /usr/bin/crictl
	I1017 19:08:36.471037   85117 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1017 19:08:36.516077   85117 command_runner.go:130] > Version:  0.1.0
	I1017 19:08:36.516101   85117 command_runner.go:130] > RuntimeName:  cri-o
	I1017 19:08:36.516106   85117 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1017 19:08:36.516111   85117 command_runner.go:130] > RuntimeApiVersion:  v1
	I1017 19:08:36.516132   85117 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1017 19:08:36.516223   85117 ssh_runner.go:195] Run: crio --version
	I1017 19:08:36.548879   85117 command_runner.go:130] > crio version 1.29.1
	I1017 19:08:36.548904   85117 command_runner.go:130] > Version:        1.29.1
	I1017 19:08:36.548909   85117 command_runner.go:130] > GitCommit:      unknown
	I1017 19:08:36.548925   85117 command_runner.go:130] > GitCommitDate:  unknown
	I1017 19:08:36.548929   85117 command_runner.go:130] > GitTreeState:   clean
	I1017 19:08:36.548935   85117 command_runner.go:130] > BuildDate:      2025-10-16T13:23:57Z
	I1017 19:08:36.548939   85117 command_runner.go:130] > GoVersion:      go1.23.4
	I1017 19:08:36.548942   85117 command_runner.go:130] > Compiler:       gc
	I1017 19:08:36.548947   85117 command_runner.go:130] > Platform:       linux/amd64
	I1017 19:08:36.548951   85117 command_runner.go:130] > Linkmode:       dynamic
	I1017 19:08:36.548955   85117 command_runner.go:130] > BuildTags:      
	I1017 19:08:36.548959   85117 command_runner.go:130] >   containers_image_ostree_stub
	I1017 19:08:36.548963   85117 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1017 19:08:36.548966   85117 command_runner.go:130] >   btrfs_noversion
	I1017 19:08:36.548970   85117 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1017 19:08:36.548974   85117 command_runner.go:130] >   libdm_no_deferred_remove
	I1017 19:08:36.548978   85117 command_runner.go:130] >   seccomp
	I1017 19:08:36.548982   85117 command_runner.go:130] > LDFlags:          unknown
	I1017 19:08:36.549001   85117 command_runner.go:130] > SeccompEnabled:   true
	I1017 19:08:36.549005   85117 command_runner.go:130] > AppArmorEnabled:  false
	I1017 19:08:36.549081   85117 ssh_runner.go:195] Run: crio --version
	I1017 19:08:36.579072   85117 command_runner.go:130] > crio version 1.29.1
	I1017 19:08:36.579097   85117 command_runner.go:130] > Version:        1.29.1
	I1017 19:08:36.579102   85117 command_runner.go:130] > GitCommit:      unknown
	I1017 19:08:36.579106   85117 command_runner.go:130] > GitCommitDate:  unknown
	I1017 19:08:36.579109   85117 command_runner.go:130] > GitTreeState:   clean
	I1017 19:08:36.579114   85117 command_runner.go:130] > BuildDate:      2025-10-16T13:23:57Z
	I1017 19:08:36.579118   85117 command_runner.go:130] > GoVersion:      go1.23.4
	I1017 19:08:36.579122   85117 command_runner.go:130] > Compiler:       gc
	I1017 19:08:36.579126   85117 command_runner.go:130] > Platform:       linux/amd64
	I1017 19:08:36.579129   85117 command_runner.go:130] > Linkmode:       dynamic
	I1017 19:08:36.579133   85117 command_runner.go:130] > BuildTags:      
	I1017 19:08:36.579137   85117 command_runner.go:130] >   containers_image_ostree_stub
	I1017 19:08:36.579141   85117 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1017 19:08:36.579144   85117 command_runner.go:130] >   btrfs_noversion
	I1017 19:08:36.579148   85117 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1017 19:08:36.579152   85117 command_runner.go:130] >   libdm_no_deferred_remove
	I1017 19:08:36.579156   85117 command_runner.go:130] >   seccomp
	I1017 19:08:36.579159   85117 command_runner.go:130] > LDFlags:          unknown
	I1017 19:08:36.579162   85117 command_runner.go:130] > SeccompEnabled:   true
	I1017 19:08:36.579166   85117 command_runner.go:130] > AppArmorEnabled:  false
	I1017 19:08:36.581921   85117 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1017 19:08:36.583156   85117 main.go:141] libmachine: (functional-016863) Calling .GetIP
	I1017 19:08:36.586303   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:08:36.586761   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:08:36.586791   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:08:36.587045   85117 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1017 19:08:36.592096   85117 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1017 19:08:36.592194   85117 kubeadm.go:883] updating cluster {Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:08:36.592323   85117 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:08:36.592384   85117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:08:36.644213   85117 command_runner.go:130] > {
	I1017 19:08:36.644235   85117 command_runner.go:130] >   "images": [
	I1017 19:08:36.644239   85117 command_runner.go:130] >     {
	I1017 19:08:36.644246   85117 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1017 19:08:36.644251   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644257   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1017 19:08:36.644260   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644265   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644287   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1017 19:08:36.644298   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1017 19:08:36.644304   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644310   85117 command_runner.go:130] >       "size": "109379124",
	I1017 19:08:36.644319   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644328   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644357   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644368   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644379   85117 command_runner.go:130] >     },
	I1017 19:08:36.644384   85117 command_runner.go:130] >     {
	I1017 19:08:36.644397   85117 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1017 19:08:36.644403   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644412   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1017 19:08:36.644418   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644429   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644441   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1017 19:08:36.644455   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1017 19:08:36.644463   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644489   85117 command_runner.go:130] >       "size": "31470524",
	I1017 19:08:36.644500   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644506   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644517   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644524   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644532   85117 command_runner.go:130] >     },
	I1017 19:08:36.644537   85117 command_runner.go:130] >     {
	I1017 19:08:36.644546   85117 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1017 19:08:36.644570   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644577   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1017 19:08:36.644586   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644592   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644602   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1017 19:08:36.644610   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1017 19:08:36.644616   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644620   85117 command_runner.go:130] >       "size": "76103547",
	I1017 19:08:36.644623   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644628   85117 command_runner.go:130] >       "username": "nonroot",
	I1017 19:08:36.644634   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644638   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644644   85117 command_runner.go:130] >     },
	I1017 19:08:36.644655   85117 command_runner.go:130] >     {
	I1017 19:08:36.644664   85117 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1017 19:08:36.644668   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644675   85117 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1017 19:08:36.644678   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644685   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644692   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1017 19:08:36.644707   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1017 19:08:36.644713   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644716   85117 command_runner.go:130] >       "size": "195976448",
	I1017 19:08:36.644720   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.644726   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.644729   85117 command_runner.go:130] >       },
	I1017 19:08:36.644733   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644737   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644741   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644744   85117 command_runner.go:130] >     },
	I1017 19:08:36.644747   85117 command_runner.go:130] >     {
	I1017 19:08:36.644753   85117 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1017 19:08:36.644760   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644764   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1017 19:08:36.644767   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644772   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644781   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1017 19:08:36.644788   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1017 19:08:36.644794   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644798   85117 command_runner.go:130] >       "size": "89046001",
	I1017 19:08:36.644802   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.644806   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.644810   85117 command_runner.go:130] >       },
	I1017 19:08:36.644813   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644819   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644822   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644830   85117 command_runner.go:130] >     },
	I1017 19:08:36.644836   85117 command_runner.go:130] >     {
	I1017 19:08:36.644842   85117 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1017 19:08:36.644845   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644850   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1017 19:08:36.644856   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644860   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644868   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1017 19:08:36.644877   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1017 19:08:36.644880   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644884   85117 command_runner.go:130] >       "size": "76004181",
	I1017 19:08:36.644888   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.644892   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.644895   85117 command_runner.go:130] >       },
	I1017 19:08:36.644899   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644902   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644908   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644911   85117 command_runner.go:130] >     },
	I1017 19:08:36.644914   85117 command_runner.go:130] >     {
	I1017 19:08:36.644920   85117 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1017 19:08:36.644924   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644928   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1017 19:08:36.644932   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644944   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644951   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1017 19:08:36.644958   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1017 19:08:36.644961   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644964   85117 command_runner.go:130] >       "size": "73138073",
	I1017 19:08:36.644968   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644972   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644975   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644979   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644982   85117 command_runner.go:130] >     },
	I1017 19:08:36.644991   85117 command_runner.go:130] >     {
	I1017 19:08:36.644999   85117 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1017 19:08:36.645003   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.645010   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1017 19:08:36.645013   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645017   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.645041   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1017 19:08:36.645052   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1017 19:08:36.645055   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645059   85117 command_runner.go:130] >       "size": "53844823",
	I1017 19:08:36.645062   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.645066   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.645068   85117 command_runner.go:130] >       },
	I1017 19:08:36.645072   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.645075   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.645079   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.645081   85117 command_runner.go:130] >     },
	I1017 19:08:36.645084   85117 command_runner.go:130] >     {
	I1017 19:08:36.645090   85117 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1017 19:08:36.645093   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.645097   85117 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1017 19:08:36.645100   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645104   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.645110   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1017 19:08:36.645116   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1017 19:08:36.645120   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645123   85117 command_runner.go:130] >       "size": "742092",
	I1017 19:08:36.645126   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.645129   85117 command_runner.go:130] >         "value": "65535"
	I1017 19:08:36.645132   85117 command_runner.go:130] >       },
	I1017 19:08:36.645136   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.645143   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.645147   85117 command_runner.go:130] >       "pinned": true
	I1017 19:08:36.645154   85117 command_runner.go:130] >     }
	I1017 19:08:36.645157   85117 command_runner.go:130] >   ]
	I1017 19:08:36.645160   85117 command_runner.go:130] > }
	I1017 19:08:36.645398   85117 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:08:36.645415   85117 crio.go:433] Images already preloaded, skipping extraction
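The preload check compares the repoTags reported by CRI-O against the expected image set for v1.34.1. To reproduce the tag list the check consumes (a sketch; jq is an assumption and not necessarily present on the guest):

	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort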
	I1017 19:08:36.645478   85117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:08:36.684800   85117 command_runner.go:130] > {
	I1017 19:08:36.684832   85117 command_runner.go:130] >   "images": [
	I1017 19:08:36.684855   85117 command_runner.go:130] >     {
	I1017 19:08:36.684869   85117 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1017 19:08:36.684877   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.684887   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1017 19:08:36.684892   85117 command_runner.go:130] >       ],
	I1017 19:08:36.684896   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.684909   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1017 19:08:36.684916   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1017 19:08:36.684919   85117 command_runner.go:130] >       ],
	I1017 19:08:36.684923   85117 command_runner.go:130] >       "size": "109379124",
	I1017 19:08:36.684927   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.684930   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.684935   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.684938   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.684942   85117 command_runner.go:130] >     },
	I1017 19:08:36.684945   85117 command_runner.go:130] >     {
	I1017 19:08:36.684950   85117 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1017 19:08:36.684955   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.684960   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1017 19:08:36.684973   85117 command_runner.go:130] >       ],
	I1017 19:08:36.684980   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.684994   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1017 19:08:36.685002   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1017 19:08:36.685005   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685013   85117 command_runner.go:130] >       "size": "31470524",
	I1017 19:08:36.685018   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.685021   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685025   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685029   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685032   85117 command_runner.go:130] >     },
	I1017 19:08:36.685035   85117 command_runner.go:130] >     {
	I1017 19:08:36.685041   85117 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1017 19:08:36.685045   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685055   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1017 19:08:36.685061   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685064   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685072   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1017 19:08:36.685081   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1017 19:08:36.685084   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685088   85117 command_runner.go:130] >       "size": "76103547",
	I1017 19:08:36.685092   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.685095   85117 command_runner.go:130] >       "username": "nonroot",
	I1017 19:08:36.685098   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685105   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685108   85117 command_runner.go:130] >     },
	I1017 19:08:36.685111   85117 command_runner.go:130] >     {
	I1017 19:08:36.685116   85117 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1017 19:08:36.685121   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685125   85117 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1017 19:08:36.685128   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685132   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685140   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1017 19:08:36.685152   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1017 19:08:36.685158   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685162   85117 command_runner.go:130] >       "size": "195976448",
	I1017 19:08:36.685165   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685169   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685172   85117 command_runner.go:130] >       },
	I1017 19:08:36.685176   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685179   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685183   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685186   85117 command_runner.go:130] >     },
	I1017 19:08:36.685195   85117 command_runner.go:130] >     {
	I1017 19:08:36.685202   85117 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1017 19:08:36.685205   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685209   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1017 19:08:36.685217   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685224   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685230   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1017 19:08:36.685243   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1017 19:08:36.685249   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685252   85117 command_runner.go:130] >       "size": "89046001",
	I1017 19:08:36.685256   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685259   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685263   85117 command_runner.go:130] >       },
	I1017 19:08:36.685266   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685270   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685274   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685277   85117 command_runner.go:130] >     },
	I1017 19:08:36.685280   85117 command_runner.go:130] >     {
	I1017 19:08:36.685292   85117 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1017 19:08:36.685301   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685310   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1017 19:08:36.685322   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685332   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685344   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1017 19:08:36.685361   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1017 19:08:36.685371   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685378   85117 command_runner.go:130] >       "size": "76004181",
	I1017 19:08:36.685388   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685394   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685403   85117 command_runner.go:130] >       },
	I1017 19:08:36.685407   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685414   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685418   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685421   85117 command_runner.go:130] >     },
	I1017 19:08:36.685424   85117 command_runner.go:130] >     {
	I1017 19:08:36.685430   85117 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1017 19:08:36.685437   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685448   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1017 19:08:36.685454   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685457   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685464   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1017 19:08:36.685473   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1017 19:08:36.685476   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685483   85117 command_runner.go:130] >       "size": "73138073",
	I1017 19:08:36.685487   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.685491   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685495   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685498   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685502   85117 command_runner.go:130] >     },
	I1017 19:08:36.685505   85117 command_runner.go:130] >     {
	I1017 19:08:36.685511   85117 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1017 19:08:36.685517   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685522   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1017 19:08:36.685528   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685531   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685577   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1017 19:08:36.685591   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1017 19:08:36.685594   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685598   85117 command_runner.go:130] >       "size": "53844823",
	I1017 19:08:36.685601   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685604   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685607   85117 command_runner.go:130] >       },
	I1017 19:08:36.685611   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685614   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685618   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685621   85117 command_runner.go:130] >     },
	I1017 19:08:36.685624   85117 command_runner.go:130] >     {
	I1017 19:08:36.685629   85117 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1017 19:08:36.685638   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685642   85117 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1017 19:08:36.685651   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685658   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685664   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1017 19:08:36.685673   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1017 19:08:36.685677   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685680   85117 command_runner.go:130] >       "size": "742092",
	I1017 19:08:36.685684   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685688   85117 command_runner.go:130] >         "value": "65535"
	I1017 19:08:36.685691   85117 command_runner.go:130] >       },
	I1017 19:08:36.685697   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685700   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685703   85117 command_runner.go:130] >       "pinned": true
	I1017 19:08:36.685706   85117 command_runner.go:130] >     }
	I1017 19:08:36.685711   85117 command_runner.go:130] >   ]
	I1017 19:08:36.685714   85117 command_runner.go:130] > }
	I1017 19:08:36.685822   85117 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:08:36.685834   85117 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:08:36.685842   85117 kubeadm.go:934] updating node { 192.168.39.205 8441 v1.34.1 crio true true} ...
	I1017 19:08:36.685955   85117 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-016863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
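The unit fragment above is installed as a systemd drop-in so kubelet starts with exactly these flags. A hand-installed equivalent, as a sketch (the drop-in path is an assumption; minikube writes this file itself):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-016863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205

	[Install]
	EOF
	sudo systemctl daemon-reload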
	I1017 19:08:36.686028   85117 ssh_runner.go:195] Run: crio config
	I1017 19:08:36.721698   85117 command_runner.go:130] ! time="2025-10-17 19:08:36.711815300Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1017 19:08:36.726934   85117 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1017 19:08:36.733071   85117 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1017 19:08:36.733099   85117 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1017 19:08:36.733109   85117 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1017 19:08:36.733113   85117 command_runner.go:130] > #
	I1017 19:08:36.733123   85117 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1017 19:08:36.733131   85117 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1017 19:08:36.733140   85117 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1017 19:08:36.733156   85117 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1017 19:08:36.733165   85117 command_runner.go:130] > # reload'.
	I1017 19:08:36.733177   85117 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1017 19:08:36.733189   85117 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1017 19:08:36.733199   85117 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1017 19:08:36.733209   85117 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1017 19:08:36.733222   85117 command_runner.go:130] > [crio]
	I1017 19:08:36.733230   85117 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1017 19:08:36.733234   85117 command_runner.go:130] > # containers images, in this directory.
	I1017 19:08:36.733241   85117 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1017 19:08:36.733256   85117 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1017 19:08:36.733263   85117 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1017 19:08:36.733270   85117 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1017 19:08:36.733277   85117 command_runner.go:130] > # imagestore = ""
	I1017 19:08:36.733283   85117 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1017 19:08:36.733291   85117 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1017 19:08:36.733296   85117 command_runner.go:130] > # storage_driver = "overlay"
	I1017 19:08:36.733307   85117 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1017 19:08:36.733320   85117 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1017 19:08:36.733327   85117 command_runner.go:130] > storage_option = [
	I1017 19:08:36.733337   85117 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1017 19:08:36.733342   85117 command_runner.go:130] > ]
	I1017 19:08:36.733354   85117 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1017 19:08:36.733363   85117 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1017 19:08:36.733368   85117 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1017 19:08:36.733374   85117 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1017 19:08:36.733380   85117 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1017 19:08:36.733387   85117 command_runner.go:130] > # always happen on a node reboot
	I1017 19:08:36.733391   85117 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1017 19:08:36.733411   85117 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1017 19:08:36.733424   85117 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1017 19:08:36.733432   85117 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1017 19:08:36.733443   85117 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1017 19:08:36.733456   85117 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1017 19:08:36.733470   85117 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1017 19:08:36.733480   85117 command_runner.go:130] > # internal_wipe = true
	I1017 19:08:36.733489   85117 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1017 19:08:36.733497   85117 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1017 19:08:36.733504   85117 command_runner.go:130] > # internal_repair = false
	I1017 19:08:36.733522   85117 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1017 19:08:36.733534   85117 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1017 19:08:36.733544   85117 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1017 19:08:36.733565   85117 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1017 19:08:36.733582   85117 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1017 19:08:36.733590   85117 command_runner.go:130] > [crio.api]
	I1017 19:08:36.733598   85117 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1017 19:08:36.733608   85117 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1017 19:08:36.733616   85117 command_runner.go:130] > # IP address on which the stream server will listen.
	I1017 19:08:36.733626   85117 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1017 19:08:36.733636   85117 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1017 19:08:36.733647   85117 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1017 19:08:36.733653   85117 command_runner.go:130] > # stream_port = "0"
	I1017 19:08:36.733665   85117 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1017 19:08:36.733671   85117 command_runner.go:130] > # stream_enable_tls = false
	I1017 19:08:36.733683   85117 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1017 19:08:36.733692   85117 command_runner.go:130] > # stream_idle_timeout = ""
	I1017 19:08:36.733699   85117 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1017 19:08:36.733709   85117 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1017 19:08:36.733719   85117 command_runner.go:130] > # minutes.
	I1017 19:08:36.733729   85117 command_runner.go:130] > # stream_tls_cert = ""
	I1017 19:08:36.733738   85117 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1017 19:08:36.733749   85117 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1017 19:08:36.733755   85117 command_runner.go:130] > # stream_tls_key = ""
	I1017 19:08:36.733767   85117 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1017 19:08:36.733777   85117 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1017 19:08:36.733807   85117 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1017 19:08:36.733817   85117 command_runner.go:130] > # stream_tls_ca = ""
	I1017 19:08:36.733828   85117 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1017 19:08:36.733839   85117 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1017 19:08:36.733850   85117 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1017 19:08:36.733860   85117 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1017 19:08:36.733870   85117 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1017 19:08:36.733888   85117 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1017 19:08:36.733894   85117 command_runner.go:130] > [crio.runtime]
	I1017 19:08:36.733902   85117 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1017 19:08:36.733914   85117 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1017 19:08:36.733923   85117 command_runner.go:130] > # "nofile=1024:2048"
	I1017 19:08:36.733936   85117 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1017 19:08:36.733945   85117 command_runner.go:130] > # default_ulimits = [
	I1017 19:08:36.733950   85117 command_runner.go:130] > # ]
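	For illustration only (not part of the captured config): inside the [crio.runtime] table, a hypothetical override raising the default open-file limits for all containers would follow the "<ulimit name>=<soft limit>:<hard limit>" format described above:
	
	# Example values, not a recommendation.
	default_ulimits = [
		"nofile=4096:8192",
	]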
	I1017 19:08:36.733961   85117 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1017 19:08:36.733966   85117 command_runner.go:130] > # no_pivot = false
	I1017 19:08:36.733974   85117 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1017 19:08:36.733984   85117 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1017 19:08:36.733990   85117 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1017 19:08:36.734005   85117 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1017 19:08:36.734017   85117 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1017 19:08:36.734041   85117 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1017 19:08:36.734050   85117 command_runner.go:130] > conmon = "/usr/bin/conmon"
	I1017 19:08:36.734057   85117 command_runner.go:130] > # Cgroup setting for conmon
	I1017 19:08:36.734070   85117 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1017 19:08:36.734079   85117 command_runner.go:130] > conmon_cgroup = "pod"
	I1017 19:08:36.734085   85117 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1017 19:08:36.734096   85117 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1017 19:08:36.734105   85117 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1017 19:08:36.734115   85117 command_runner.go:130] > conmon_env = [
	I1017 19:08:36.734124   85117 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1017 19:08:36.734133   85117 command_runner.go:130] > ]
	I1017 19:08:36.734142   85117 command_runner.go:130] > # Additional environment variables to set for all the
	I1017 19:08:36.734152   85117 command_runner.go:130] > # containers. These are overridden if set in the
	I1017 19:08:36.734161   85117 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1017 19:08:36.734170   85117 command_runner.go:130] > # default_env = [
	I1017 19:08:36.734175   85117 command_runner.go:130] > # ]
	I1017 19:08:36.734186   85117 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1017 19:08:36.734193   85117 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1017 19:08:36.734374   85117 command_runner.go:130] > # selinux = false
	I1017 19:08:36.734484   85117 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1017 19:08:36.734495   85117 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1017 19:08:36.734505   85117 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1017 19:08:36.734516   85117 command_runner.go:130] > # seccomp_profile = ""
	I1017 19:08:36.734531   85117 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1017 19:08:36.734543   85117 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1017 19:08:36.734567   85117 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1017 19:08:36.734585   85117 command_runner.go:130] > # which might increase security.
	I1017 19:08:36.734593   85117 command_runner.go:130] > # This option is currently deprecated,
	I1017 19:08:36.734610   85117 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1017 19:08:36.734624   85117 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1017 19:08:36.734634   85117 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1017 19:08:36.734646   85117 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1017 19:08:36.734697   85117 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1017 19:08:36.735591   85117 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1017 19:08:36.735609   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.735623   85117 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1017 19:08:36.735636   85117 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1017 19:08:36.735643   85117 command_runner.go:130] > # the cgroup blockio controller.
	I1017 19:08:36.735656   85117 command_runner.go:130] > # blockio_config_file = ""
	I1017 19:08:36.735670   85117 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1017 19:08:36.735675   85117 command_runner.go:130] > # blockio parameters.
	I1017 19:08:36.735681   85117 command_runner.go:130] > # blockio_reload = false
	I1017 19:08:36.735706   85117 command_runner.go:130] > # Used to change the irqbalance service config file path, which is used for configuring
	I1017 19:08:36.735733   85117 command_runner.go:130] > # the irqbalance daemon.
	I1017 19:08:36.735812   85117 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1017 19:08:36.735833   85117 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask that CRI-O should
	I1017 19:08:36.736170   85117 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1017 19:08:36.736193   85117 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1017 19:08:36.736203   85117 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1017 19:08:36.736229   85117 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1017 19:08:36.736240   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.736246   85117 command_runner.go:130] > # rdt_config_file = ""
	I1017 19:08:36.736258   85117 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1017 19:08:36.736268   85117 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1017 19:08:36.736300   85117 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1017 19:08:36.736312   85117 command_runner.go:130] > # separate_pull_cgroup = ""
	I1017 19:08:36.736321   85117 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1017 19:08:36.736329   85117 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1017 19:08:36.736335   85117 command_runner.go:130] > # will be added.
	I1017 19:08:36.736341   85117 command_runner.go:130] > # default_capabilities = [
	I1017 19:08:36.736349   85117 command_runner.go:130] > # 	"CHOWN",
	I1017 19:08:36.736355   85117 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1017 19:08:36.736360   85117 command_runner.go:130] > # 	"FSETID",
	I1017 19:08:36.736366   85117 command_runner.go:130] > # 	"FOWNER",
	I1017 19:08:36.736374   85117 command_runner.go:130] > # 	"SETGID",
	I1017 19:08:36.736379   85117 command_runner.go:130] > # 	"SETUID",
	I1017 19:08:36.736384   85117 command_runner.go:130] > # 	"SETPCAP",
	I1017 19:08:36.736392   85117 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1017 19:08:36.736401   85117 command_runner.go:130] > # 	"KILL",
	I1017 19:08:36.736409   85117 command_runner.go:130] > # ]
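	As a sketch (again, not from the captured config), a cluster wanting a tighter default set could uncomment the list and keep only what its workloads need, for example:
	
	# Illustrative subset; drops FOWNER, FSETID, KILL, etc. from the defaults above.
	default_capabilities = [
		"CHOWN",
		"NET_BIND_SERVICE",
		"SETGID",
		"SETUID",
	]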
	I1017 19:08:36.736420   85117 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1017 19:08:36.736433   85117 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1017 19:08:36.736444   85117 command_runner.go:130] > # add_inheritable_capabilities = false
	I1017 19:08:36.736452   85117 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1017 19:08:36.736463   85117 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1017 19:08:36.736472   85117 command_runner.go:130] > default_sysctls = [
	I1017 19:08:36.736482   85117 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1017 19:08:36.736490   85117 command_runner.go:130] > ]
	I1017 19:08:36.736501   85117 command_runner.go:130] > # List of devices on the host that a
	I1017 19:08:36.736513   85117 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1017 19:08:36.736521   85117 command_runner.go:130] > # allowed_devices = [
	I1017 19:08:36.736526   85117 command_runner.go:130] > # 	"/dev/fuse",
	I1017 19:08:36.736534   85117 command_runner.go:130] > # ]
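	For reference, a hypothetical setup letting pods request /dev/fuse through the annotation mentioned above might look like this (the chosen runtime handler would also need "io.kubernetes.cri-o.Devices" in its allowed_annotations, per the runtimes table documented further below):
	
	allowed_devices = [
		"/dev/fuse",
	]
	# A pod would then opt in with the annotation:
	#   io.kubernetes.cri-o.Devices: "/dev/fuse"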
	I1017 19:08:36.736541   85117 command_runner.go:130] > # List of additional devices, specified as
	I1017 19:08:36.736569   85117 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1017 19:08:36.736580   85117 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1017 19:08:36.736589   85117 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1017 19:08:36.736598   85117 command_runner.go:130] > # additional_devices = [
	I1017 19:08:36.736602   85117 command_runner.go:130] > # ]
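	A hypothetical entry following the "<device-on-host>:<device-on-container>:<permissions>" format above, which would expose the device to every container:
	
	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",
	]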
	I1017 19:08:36.736612   85117 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1017 19:08:36.736621   85117 command_runner.go:130] > # cdi_spec_dirs = [
	I1017 19:08:36.736627   85117 command_runner.go:130] > # 	"/etc/cdi",
	I1017 19:08:36.736635   85117 command_runner.go:130] > # 	"/var/run/cdi",
	I1017 19:08:36.736640   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736652   85117 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1017 19:08:36.736664   85117 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1017 19:08:36.736673   85117 command_runner.go:130] > # Defaults to false.
	I1017 19:08:36.736684   85117 command_runner.go:130] > # device_ownership_from_security_context = false
	I1017 19:08:36.736696   85117 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1017 19:08:36.736707   85117 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1017 19:08:36.736715   85117 command_runner.go:130] > # hooks_dir = [
	I1017 19:08:36.736723   85117 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1017 19:08:36.736732   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736744   85117 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1017 19:08:36.736756   85117 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1017 19:08:36.736767   85117 command_runner.go:130] > # its default mounts from the following two files:
	I1017 19:08:36.736774   85117 command_runner.go:130] > #
	I1017 19:08:36.736783   85117 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1017 19:08:36.736795   85117 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1017 19:08:36.736809   85117 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1017 19:08:36.736817   85117 command_runner.go:130] > #
	I1017 19:08:36.736826   85117 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1017 19:08:36.736838   85117 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1017 19:08:36.736850   85117 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1017 19:08:36.736858   85117 command_runner.go:130] > #      only add mounts it finds in this file.
	I1017 19:08:36.736865   85117 command_runner.go:130] > #
	I1017 19:08:36.736871   85117 command_runner.go:130] > # default_mounts_file = ""
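	To make the two-file lookup concrete, a hypothetical override (paths are examples only) could be wired up as:
	
	default_mounts_file = "/etc/containers/mounts.conf"
	# where /etc/containers/mounts.conf contains one /SRC:/DST mount per line, e.g.:
	#   /usr/share/rhel/secrets:/run/secrets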
	I1017 19:08:36.736882   85117 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1017 19:08:36.736894   85117 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1017 19:08:36.736914   85117 command_runner.go:130] > pids_limit = 1024
	I1017 19:08:36.736938   85117 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1017 19:08:36.736957   85117 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1017 19:08:36.736976   85117 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1017 19:08:36.737004   85117 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1017 19:08:36.737015   85117 command_runner.go:130] > # log_size_max = -1
	I1017 19:08:36.737028   85117 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1017 19:08:36.737037   85117 command_runner.go:130] > # log_to_journald = false
	I1017 19:08:36.737051   85117 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1017 19:08:36.737062   85117 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1017 19:08:36.737073   85117 command_runner.go:130] > # Path to directory for container attach sockets.
	I1017 19:08:36.737084   85117 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1017 19:08:36.737094   85117 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1017 19:08:36.737102   85117 command_runner.go:130] > # bind_mount_prefix = ""
	I1017 19:08:36.737107   85117 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1017 19:08:36.737113   85117 command_runner.go:130] > # read_only = false
	I1017 19:08:36.737122   85117 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1017 19:08:36.737131   85117 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1017 19:08:36.737137   85117 command_runner.go:130] > # live configuration reload.
	I1017 19:08:36.737141   85117 command_runner.go:130] > # log_level = "info"
	I1017 19:08:36.737149   85117 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1017 19:08:36.737153   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.737159   85117 command_runner.go:130] > # log_filter = ""
	I1017 19:08:36.737165   85117 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1017 19:08:36.737175   85117 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1017 19:08:36.737181   85117 command_runner.go:130] > # separated by comma.
	I1017 19:08:36.737189   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737199   85117 command_runner.go:130] > # uid_mappings = ""
	I1017 19:08:36.737214   85117 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1017 19:08:36.737222   85117 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1017 19:08:36.737227   85117 command_runner.go:130] > # separated by comma.
	I1017 19:08:36.737234   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737238   85117 command_runner.go:130] > # gid_mappings = ""
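	Both options are deprecated per the comments above, but for illustration, the containerID:HostID:Size format would be used like this (example ranges):
	
	# Map container UIDs/GIDs 0..65535 onto host IDs starting at 100000.
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"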
	I1017 19:08:36.737244   85117 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1017 19:08:36.737252   85117 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1017 19:08:36.737258   85117 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1017 19:08:36.737268   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737274   85117 command_runner.go:130] > # minimum_mappable_uid = -1
	I1017 19:08:36.737280   85117 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1017 19:08:36.737285   85117 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1017 19:08:36.737293   85117 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1017 19:08:36.737301   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737306   85117 command_runner.go:130] > # minimum_mappable_gid = -1
	I1017 19:08:36.737312   85117 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1017 19:08:36.737318   85117 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1017 19:08:36.737326   85117 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I1017 19:08:36.737330   85117 command_runner.go:130] > # ctr_stop_timeout = 30
	I1017 19:08:36.737335   85117 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1017 19:08:36.737343   85117 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1017 19:08:36.737349   85117 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1017 19:08:36.737354   85117 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1017 19:08:36.737360   85117 command_runner.go:130] > drop_infra_ctr = false
	I1017 19:08:36.737365   85117 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1017 19:08:36.737370   85117 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1017 19:08:36.737377   85117 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1017 19:08:36.737382   85117 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1017 19:08:36.737388   85117 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1017 19:08:36.737396   85117 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1017 19:08:36.737402   85117 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1017 19:08:36.737409   85117 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1017 19:08:36.737412   85117 command_runner.go:130] > # shared_cpuset = ""
	I1017 19:08:36.737421   85117 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1017 19:08:36.737428   85117 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1017 19:08:36.737434   85117 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1017 19:08:36.737441   85117 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1017 19:08:36.737447   85117 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1017 19:08:36.737452   85117 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1017 19:08:36.737460   85117 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1017 19:08:36.737464   85117 command_runner.go:130] > # enable_criu_support = false
	I1017 19:08:36.737471   85117 command_runner.go:130] > # Enable/disable the generation of the container and
	I1017 19:08:36.737477   85117 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1017 19:08:36.737484   85117 command_runner.go:130] > # enable_pod_events = false
	I1017 19:08:36.737490   85117 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1017 19:08:36.737507   85117 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1017 19:08:36.737510   85117 command_runner.go:130] > # default_runtime = "runc"
	I1017 19:08:36.737518   85117 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1017 19:08:36.737525   85117 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1017 19:08:36.737537   85117 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1017 19:08:36.737545   85117 command_runner.go:130] > # creation as a file is not desired either.
	I1017 19:08:36.737567   85117 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1017 19:08:36.737578   85117 command_runner.go:130] > # the hostname is being managed dynamically.
	I1017 19:08:36.737585   85117 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1017 19:08:36.737590   85117 command_runner.go:130] > # ]
	I1017 19:08:36.737597   85117 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1017 19:08:36.737605   85117 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1017 19:08:36.737613   85117 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1017 19:08:36.737618   85117 command_runner.go:130] > # Each entry in the table should follow the format:
	I1017 19:08:36.737623   85117 command_runner.go:130] > #
	I1017 19:08:36.737628   85117 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1017 19:08:36.737635   85117 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1017 19:08:36.737639   85117 command_runner.go:130] > # runtime_type = "oci"
	I1017 19:08:36.737698   85117 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1017 19:08:36.737709   85117 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1017 19:08:36.737719   85117 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1017 19:08:36.737725   85117 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1017 19:08:36.737735   85117 command_runner.go:130] > # monitor_env = []
	I1017 19:08:36.737744   85117 command_runner.go:130] > # privileged_without_host_devices = false
	I1017 19:08:36.737748   85117 command_runner.go:130] > # allowed_annotations = []
	I1017 19:08:36.737754   85117 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1017 19:08:36.737763   85117 command_runner.go:130] > # Where:
	I1017 19:08:36.737771   85117 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1017 19:08:36.737778   85117 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1017 19:08:36.737786   85117 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1017 19:08:36.737794   85117 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1017 19:08:36.737798   85117 command_runner.go:130] > #   in $PATH.
	I1017 19:08:36.737803   85117 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1017 19:08:36.737810   85117 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1017 19:08:36.737816   85117 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1017 19:08:36.737821   85117 command_runner.go:130] > #   state.
	I1017 19:08:36.737828   85117 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1017 19:08:36.737836   85117 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1017 19:08:36.737842   85117 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1017 19:08:36.737849   85117 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1017 19:08:36.737856   85117 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1017 19:08:36.737865   85117 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1017 19:08:36.737872   85117 command_runner.go:130] > #   The currently recognized values are:
	I1017 19:08:36.737878   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1017 19:08:36.737892   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1017 19:08:36.737900   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1017 19:08:36.737906   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1017 19:08:36.737916   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1017 19:08:36.737925   85117 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1017 19:08:36.737935   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1017 19:08:36.737943   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1017 19:08:36.737951   85117 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1017 19:08:36.737958   85117 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1017 19:08:36.737966   85117 command_runner.go:130] > #   deprecated option "conmon".
	I1017 19:08:36.737973   85117 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1017 19:08:36.737981   85117 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1017 19:08:36.737987   85117 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1017 19:08:36.737995   85117 command_runner.go:130] > #   should be moved to the container's cgroup
	I1017 19:08:36.738001   85117 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1017 19:08:36.738010   85117 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1017 19:08:36.738019   85117 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1017 19:08:36.738027   85117 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1017 19:08:36.738030   85117 command_runner.go:130] > #
	I1017 19:08:36.738038   85117 command_runner.go:130] > # Using the seccomp notifier feature:
	I1017 19:08:36.738041   85117 command_runner.go:130] > #
	I1017 19:08:36.738046   85117 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1017 19:08:36.738055   85117 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1017 19:08:36.738060   85117 command_runner.go:130] > #
	I1017 19:08:36.738067   85117 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1017 19:08:36.738075   85117 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1017 19:08:36.738080   85117 command_runner.go:130] > #
	I1017 19:08:36.738086   85117 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1017 19:08:36.738090   85117 command_runner.go:130] > # feature.
	I1017 19:08:36.738092   85117 command_runner.go:130] > #
	I1017 19:08:36.738100   85117 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1017 19:08:36.738108   85117 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1017 19:08:36.738114   85117 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1017 19:08:36.738123   85117 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1017 19:08:36.738132   85117 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1017 19:08:36.738137   85117 command_runner.go:130] > #
	I1017 19:08:36.738143   85117 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1017 19:08:36.738151   85117 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1017 19:08:36.738156   85117 command_runner.go:130] > #
	I1017 19:08:36.738162   85117 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1017 19:08:36.738169   85117 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1017 19:08:36.738172   85117 command_runner.go:130] > #
	I1017 19:08:36.738178   85117 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1017 19:08:36.738186   85117 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1017 19:08:36.738190   85117 command_runner.go:130] > # limitation.
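	Pulling these pieces together, a hypothetical handler (name and paths are illustrative, not from this node) that opts into the notifier could be declared as:
	
	[crio.runtime.runtimes.runc-debug]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc-debug"
	monitor_path = "/usr/bin/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]
	
	A pod scheduled with this handler, restartPolicy "Never", and the annotation value "stop" would then, per the description above, be terminated about 5 seconds after its first blocked syscall.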
	I1017 19:08:36.738198   85117 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1017 19:08:36.738202   85117 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1017 19:08:36.738212   85117 command_runner.go:130] > runtime_type = "oci"
	I1017 19:08:36.738218   85117 command_runner.go:130] > runtime_root = "/run/runc"
	I1017 19:08:36.738222   85117 command_runner.go:130] > runtime_config_path = ""
	I1017 19:08:36.738228   85117 command_runner.go:130] > monitor_path = "/usr/bin/conmon"
	I1017 19:08:36.738233   85117 command_runner.go:130] > monitor_cgroup = "pod"
	I1017 19:08:36.738239   85117 command_runner.go:130] > monitor_exec_cgroup = ""
	I1017 19:08:36.738242   85117 command_runner.go:130] > monitor_env = [
	I1017 19:08:36.738250   85117 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1017 19:08:36.738253   85117 command_runner.go:130] > ]
	I1017 19:08:36.738258   85117 command_runner.go:130] > privileged_without_host_devices = false
	I1017 19:08:36.738270   85117 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1017 19:08:36.738277   85117 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1017 19:08:36.738283   85117 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1017 19:08:36.738302   85117 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1017 19:08:36.738315   85117 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1017 19:08:36.738320   85117 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1017 19:08:36.738331   85117 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1017 19:08:36.738339   85117 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1017 19:08:36.738347   85117 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1017 19:08:36.738354   85117 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1017 19:08:36.738359   85117 command_runner.go:130] > # Example:
	I1017 19:08:36.738364   85117 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1017 19:08:36.738368   85117 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1017 19:08:36.738373   85117 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1017 19:08:36.738378   85117 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1017 19:08:36.738381   85117 command_runner.go:130] > # cpuset = "0-1"
	I1017 19:08:36.738384   85117 command_runner.go:130] > # cpushares = "1024"
	I1017 19:08:36.738388   85117 command_runner.go:130] > # Where:
	I1017 19:08:36.738392   85117 command_runner.go:130] > # The workload name is workload-type.
	I1017 19:08:36.738399   85117 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1017 19:08:36.738406   85117 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1017 19:08:36.738411   85117 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1017 19:08:36.738419   85117 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1017 19:08:36.738427   85117 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
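	Spelled out as a complete, hypothetical stanza (values are examples; the table is EXPERIMENTAL per the note above), the example becomes:
	
	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	# CPU list for cpuset, share count for cpushares.
	cpuset = "0-1"
	cpushares = "1024"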
	I1017 19:08:36.738431   85117 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1017 19:08:36.738437   85117 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1017 19:08:36.738443   85117 command_runner.go:130] > # Default value is set to true
	I1017 19:08:36.738447   85117 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1017 19:08:36.738454   85117 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1017 19:08:36.738459   85117 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1017 19:08:36.738465   85117 command_runner.go:130] > # Default value is set to 'false'
	I1017 19:08:36.738470   85117 command_runner.go:130] > # disable_hostport_mapping = false
	I1017 19:08:36.738478   85117 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1017 19:08:36.738484   85117 command_runner.go:130] > #
	I1017 19:08:36.738489   85117 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1017 19:08:36.738500   85117 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1017 19:08:36.738508   85117 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1017 19:08:36.738517   85117 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1017 19:08:36.738522   85117 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1017 19:08:36.738529   85117 command_runner.go:130] > [crio.image]
	I1017 19:08:36.738535   85117 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1017 19:08:36.738541   85117 command_runner.go:130] > # default_transport = "docker://"
	I1017 19:08:36.738547   85117 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1017 19:08:36.738573   85117 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1017 19:08:36.738580   85117 command_runner.go:130] > # global_auth_file = ""
	I1017 19:08:36.738589   85117 command_runner.go:130] > # The image used to instantiate infra containers.
	I1017 19:08:36.738594   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.738601   85117 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10.1"
	I1017 19:08:36.738608   85117 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1017 19:08:36.738616   85117 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1017 19:08:36.738622   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.738626   85117 command_runner.go:130] > # pause_image_auth_file = ""
	I1017 19:08:36.738634   85117 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1017 19:08:36.738642   85117 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1017 19:08:36.738648   85117 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1017 19:08:36.738656   85117 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1017 19:08:36.738660   85117 command_runner.go:130] > # pause_command = "/pause"
	I1017 19:08:36.738668   85117 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1017 19:08:36.738674   85117 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1017 19:08:36.738690   85117 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1017 19:08:36.738700   85117 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1017 19:08:36.738709   85117 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1017 19:08:36.738718   85117 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1017 19:08:36.738722   85117 command_runner.go:130] > # pinned_images = [
	I1017 19:08:36.738727   85117 command_runner.go:130] > # ]
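	An illustrative selection showing the three pattern styles described above (image names are examples):
	
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",      # exact match
		"registry.k8s.io/kube-apiserver*",   # glob: wildcard at the end
		"*coredns*",                         # keyword: wildcards on both ends
	]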
	I1017 19:08:36.738734   85117 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1017 19:08:36.738742   85117 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1017 19:08:36.738748   85117 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1017 19:08:36.738756   85117 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1017 19:08:36.738762   85117 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1017 19:08:36.738768   85117 command_runner.go:130] > # signature_policy = ""
	I1017 19:08:36.738773   85117 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1017 19:08:36.738781   85117 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1017 19:08:36.738787   85117 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1017 19:08:36.738792   85117 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or the
	I1017 19:08:36.738798   85117 command_runner.go:130] > # system-wide policy will be used as a fallback. Must be an absolute path.
	I1017 19:08:36.738802   85117 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1017 19:08:36.738808   85117 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1017 19:08:36.738813   85117 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1017 19:08:36.738817   85117 command_runner.go:130] > # changing them here.
	I1017 19:08:36.738820   85117 command_runner.go:130] > # insecure_registries = [
	I1017 19:08:36.738823   85117 command_runner.go:130] > # ]
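	For completeness, a hypothetical entry for a plaintext in-cluster registry (the comment above recommends configuring this via /etc/containers/registries.conf instead):
	
	insecure_registries = [
		"registry.internal.example:5000",
	]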
	I1017 19:08:36.738828   85117 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1017 19:08:36.738833   85117 command_runner.go:130] > # ignore; "ignore" will skip volumes entirely.
	I1017 19:08:36.738836   85117 command_runner.go:130] > # image_volumes = "mkdir"
	I1017 19:08:36.738841   85117 command_runner.go:130] > # Temporary directory to use for storing big files
	I1017 19:08:36.738845   85117 command_runner.go:130] > # big_files_temporary_dir = ""
	I1017 19:08:36.738850   85117 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1017 19:08:36.738853   85117 command_runner.go:130] > # CNI plugins.
	I1017 19:08:36.738856   85117 command_runner.go:130] > [crio.network]
	I1017 19:08:36.738861   85117 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1017 19:08:36.738869   85117 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1017 19:08:36.738873   85117 command_runner.go:130] > # cni_default_network = ""
	I1017 19:08:36.738880   85117 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1017 19:08:36.738884   85117 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1017 19:08:36.738892   85117 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1017 19:08:36.738895   85117 command_runner.go:130] > # plugin_dirs = [
	I1017 19:08:36.738901   85117 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1017 19:08:36.738904   85117 command_runner.go:130] > # ]
	I1017 19:08:36.738909   85117 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1017 19:08:36.738915   85117 command_runner.go:130] > [crio.metrics]
	I1017 19:08:36.738919   85117 command_runner.go:130] > # Globally enable or disable metrics support.
	I1017 19:08:36.738925   85117 command_runner.go:130] > enable_metrics = true
	I1017 19:08:36.738929   85117 command_runner.go:130] > # Specify enabled metrics collectors.
	I1017 19:08:36.738939   85117 command_runner.go:130] > # Per default all metrics are enabled.
	I1017 19:08:36.738948   85117 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1017 19:08:36.738957   85117 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1017 19:08:36.738966   85117 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1017 19:08:36.738969   85117 command_runner.go:130] > # metrics_collectors = [
	I1017 19:08:36.738975   85117 command_runner.go:130] > # 	"operations",
	I1017 19:08:36.738980   85117 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1017 19:08:36.738988   85117 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1017 19:08:36.738992   85117 command_runner.go:130] > # 	"operations_errors",
	I1017 19:08:36.738998   85117 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1017 19:08:36.739002   85117 command_runner.go:130] > # 	"image_pulls_by_name",
	I1017 19:08:36.739008   85117 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1017 19:08:36.739012   85117 command_runner.go:130] > # 	"image_pulls_failures",
	I1017 19:08:36.739019   85117 command_runner.go:130] > # 	"image_pulls_successes",
	I1017 19:08:36.739022   85117 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1017 19:08:36.739029   85117 command_runner.go:130] > # 	"image_layer_reuse",
	I1017 19:08:36.739033   85117 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1017 19:08:36.739037   85117 command_runner.go:130] > # 	"containers_oom_total",
	I1017 19:08:36.739041   85117 command_runner.go:130] > # 	"containers_oom",
	I1017 19:08:36.739047   85117 command_runner.go:130] > # 	"processes_defunct",
	I1017 19:08:36.739050   85117 command_runner.go:130] > # 	"operations_total",
	I1017 19:08:36.739057   85117 command_runner.go:130] > # 	"operations_latency_seconds",
	I1017 19:08:36.739061   85117 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1017 19:08:36.739068   85117 command_runner.go:130] > # 	"operations_errors_total",
	I1017 19:08:36.739071   85117 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1017 19:08:36.739078   85117 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1017 19:08:36.739082   85117 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1017 19:08:36.739088   85117 command_runner.go:130] > # 	"image_pulls_success_total",
	I1017 19:08:36.739092   85117 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1017 19:08:36.739099   85117 command_runner.go:130] > # 	"containers_oom_count_total",
	I1017 19:08:36.739103   85117 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1017 19:08:36.739110   85117 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1017 19:08:36.739112   85117 command_runner.go:130] > # ]
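	As a sketch of the prefix equivalence described above, an operator could enable just a subset of collectors (the selection here is arbitrary, for illustration):
	
	metrics_collectors = [
		"operations",                    # same as "crio_operations"
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]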
	I1017 19:08:36.739119   85117 command_runner.go:130] > # The port on which the metrics server will listen.
	I1017 19:08:36.739125   85117 command_runner.go:130] > # metrics_port = 9090
	I1017 19:08:36.739132   85117 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1017 19:08:36.739136   85117 command_runner.go:130] > # metrics_socket = ""
	I1017 19:08:36.739143   85117 command_runner.go:130] > # The certificate for the secure metrics server.
	I1017 19:08:36.739148   85117 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1017 19:08:36.739156   85117 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1017 19:08:36.739161   85117 command_runner.go:130] > # certificate on any modification event.
	I1017 19:08:36.739165   85117 command_runner.go:130] > # metrics_cert = ""
	I1017 19:08:36.739170   85117 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1017 19:08:36.739176   85117 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1017 19:08:36.739180   85117 command_runner.go:130] > # metrics_key = ""
	I1017 19:08:36.739188   85117 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1017 19:08:36.739191   85117 command_runner.go:130] > [crio.tracing]
	I1017 19:08:36.739200   85117 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1017 19:08:36.739203   85117 command_runner.go:130] > # enable_tracing = false
	I1017 19:08:36.739214   85117 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1017 19:08:36.739221   85117 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1017 19:08:36.739227   85117 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1017 19:08:36.739240   85117 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1017 19:08:36.739246   85117 command_runner.go:130] > # CRI-O NRI configuration.
	I1017 19:08:36.739250   85117 command_runner.go:130] > [crio.nri]
	I1017 19:08:36.739254   85117 command_runner.go:130] > # Globally enable or disable NRI.
	I1017 19:08:36.739260   85117 command_runner.go:130] > # enable_nri = false
	I1017 19:08:36.739264   85117 command_runner.go:130] > # NRI socket to listen on.
	I1017 19:08:36.739271   85117 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1017 19:08:36.739275   85117 command_runner.go:130] > # NRI plugin directory to use.
	I1017 19:08:36.739280   85117 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1017 19:08:36.739287   85117 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1017 19:08:36.739291   85117 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1017 19:08:36.739299   85117 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1017 19:08:36.739303   85117 command_runner.go:130] > # nri_disable_connections = false
	I1017 19:08:36.739310   85117 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1017 19:08:36.739315   85117 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1017 19:08:36.739325   85117 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1017 19:08:36.739332   85117 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1017 19:08:36.739337   85117 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1017 19:08:36.739343   85117 command_runner.go:130] > [crio.stats]
	I1017 19:08:36.739348   85117 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1017 19:08:36.739353   85117 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1017 19:08:36.739360   85117 command_runner.go:130] > # stats_collection_period = 0
	I1017 19:08:36.739439   85117 cni.go:84] Creating CNI manager for ""
	I1017 19:08:36.739451   85117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 19:08:36.739480   85117 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:08:36.739504   85117 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-016863 NodeName:functional-016863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:08:36.739644   85117 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-016863"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.205"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
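The YAML above is materialized from the kubeadm.go options struct logged just before it. As a rough sketch of that rendering step, a Go struct fed through text/template produces the same InitConfiguration shape; the struct fields and the template here are illustrative stand-ins, not minikube's actual names:

package main

import (
	"os"
	"text/template"
)

// initCfg carries the handful of values that vary per cluster;
// the values filled in below are the ones from the log.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
`

func main() {
	cfg := initCfg{
		AdvertiseAddress: "192.168.39.205",
		BindPort:         8441,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "functional-016863",
		NodeIP:           "192.168.39.205",
	}
	// template.Must panics on a parse error, acceptable for a static template.
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}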
	
	I1017 19:08:36.739707   85117 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:08:36.752377   85117 command_runner.go:130] > kubeadm
	I1017 19:08:36.752404   85117 command_runner.go:130] > kubectl
	I1017 19:08:36.752408   85117 command_runner.go:130] > kubelet
	I1017 19:08:36.752864   85117 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:08:36.752933   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:08:36.764722   85117 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1017 19:08:36.786673   85117 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:08:36.808021   85117 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1017 19:08:36.828821   85117 ssh_runner.go:195] Run: grep 192.168.39.205	control-plane.minikube.internal$ /etc/hosts
	I1017 19:08:36.833177   85117 command_runner.go:130] > 192.168.39.205	control-plane.minikube.internal
	I1017 19:08:36.833246   85117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:08:37.010934   85117 ssh_runner.go:195] Run: sudo systemctl start kubelet
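The steps from the 10-kubeadm.conf transfer through `systemctl start kubelet` follow the usual systemd deployment pattern: write the generated unit files, daemon-reload, then start. A minimal local sketch of that sequence (placeholder drop-in contents, exec.Command standing in for minikube's ssh_runner):

package main

import (
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Placeholder contents; minikube renders real kubelet flags here.
	dropIn := []byte("[Service]\n# kubelet ExecStart overrides would be rendered here\n")
	// The target directory was created by the `mkdir -p` step in the log.
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", dropIn, 0644); err != nil {
		panic(err)
	}
	// Pick up the new unit files, then start the service.
	if err := run("systemctl", "daemon-reload"); err != nil {
		panic(err)
	}
	if err := run("systemctl", "start", "kubelet"); err != nil {
		panic(err)
	}
}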
	I1017 19:08:37.030439   85117 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863 for IP: 192.168.39.205
	I1017 19:08:37.030467   85117 certs.go:195] generating shared ca certs ...
	I1017 19:08:37.030485   85117 certs.go:227] acquiring lock for ca certs: {Name:mka410ab7d3b92eaaa0d0545223807c0ba196baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:08:37.030690   85117 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key
	I1017 19:08:37.030747   85117 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key
	I1017 19:08:37.030762   85117 certs.go:257] generating profile certs ...
	I1017 19:08:37.030878   85117 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/client.key
	I1017 19:08:37.030972   85117 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.key.c24585d5
	I1017 19:08:37.031049   85117 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.key
	I1017 19:08:37.031067   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:08:37.031086   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:08:37.031102   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:08:37.031121   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:08:37.031138   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:08:37.031155   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:08:37.031179   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:08:37.031195   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:08:37.031270   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem (1338 bytes)
	W1017 19:08:37.031314   85117 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439_empty.pem, impossibly tiny 0 bytes
	I1017 19:08:37.031328   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 19:08:37.031364   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:08:37.031395   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:08:37.031426   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem (1679 bytes)
	I1017 19:08:37.031478   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem (1708 bytes)
	I1017 19:08:37.031518   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.031537   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.031564   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem -> /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.032341   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:08:37.064212   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:08:37.094935   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:08:37.126973   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:08:37.157540   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 19:08:37.187168   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 19:08:37.217543   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:08:37.247400   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 19:08:37.278758   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem --> /usr/share/ca-certificates/794392.pem (1708 bytes)
	I1017 19:08:37.308088   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:08:37.338377   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem --> /usr/share/ca-certificates/79439.pem (1338 bytes)
	I1017 19:08:37.369350   85117 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:08:37.390154   85117 ssh_runner.go:195] Run: openssl version
	I1017 19:08:37.397183   85117 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1017 19:08:37.397310   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/79439.pem && ln -fs /usr/share/ca-certificates/79439.pem /etc/ssl/certs/79439.pem"
	I1017 19:08:37.411628   85117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.417085   85117 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 17 19:05 /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.417178   85117 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:05 /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.417250   85117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.424962   85117 command_runner.go:130] > 51391683
	I1017 19:08:37.425158   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/79439.pem /etc/ssl/certs/51391683.0"
	I1017 19:08:37.437578   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/794392.pem && ln -fs /usr/share/ca-certificates/794392.pem /etc/ssl/certs/794392.pem"
	I1017 19:08:37.452363   85117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.458096   85117 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 17 19:05 /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.458164   85117 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:05 /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.458223   85117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.466074   85117 command_runner.go:130] > 3ec20f2e
	I1017 19:08:37.466249   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/794392.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:08:37.478828   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:08:37.493772   85117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.499621   85117 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.499822   85117 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.499886   85117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.507945   85117 command_runner.go:130] > b5213941
	I1017 19:08:37.508223   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
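The ls/openssl/ln sequences above are the standard OpenSSL trust-store installation: compute the certificate's subject hash (51391683, 3ec20f2e, b5213941 in the log) and symlink `<hash>.0` in /etc/ssl/certs to the certificate. A compact Go sketch of the same two steps, using the paths from the log and minimal error handling:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash
	// OpenSSL uses to look certificates up in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Mirror `ln -fs`: drop any stale link, then symlink hash.0 -> cert.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/79439.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}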
	I1017 19:08:37.520563   85117 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:08:37.526401   85117 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:08:37.526439   85117 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1017 19:08:37.526449   85117 command_runner.go:130] > Device: 253,1	Inode: 1054372     Links: 1
	I1017 19:08:37.526460   85117 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1017 19:08:37.526477   85117 command_runner.go:130] > Access: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526489   85117 command_runner.go:130] > Modify: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526500   85117 command_runner.go:130] > Change: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526510   85117 command_runner.go:130] >  Birth: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526610   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:08:37.533974   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.534188   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:08:37.541725   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.541833   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:08:37.549277   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.549348   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:08:37.556865   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.556943   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:08:37.564379   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.564452   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 19:08:37.571575   85117 command_runner.go:130] > Certificate will not expire
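Each `-checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check can be done natively with crypto/x509 rather than shelling out to openssl; a sketch, assuming one of the cert paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at pemPath
// expires within the given window (the -checkend semantics).
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}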
	I1017 19:08:37.571807   85117 kubeadm.go:400] StartCluster: {Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:08:37.571943   85117 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:08:37.572009   85117 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:08:37.614275   85117 command_runner.go:130] > 5052ee3b4b13e54f7516a211d580d31d7e4856f34ebe5b5bc8a1778244018fb0
	I1017 19:08:37.614306   85117 command_runner.go:130] > 56c02355399031d32d66f1780ee1bc7396eeb5eb1b454f946254fe345879e8e0
	I1017 19:08:37.614315   85117 command_runner.go:130] > 56048147246b1d30ce16d066a4bbb216f1f7c9b1459e21fa60ee108fdd3aa42a
	I1017 19:08:37.614325   85117 command_runner.go:130] > 1b1f7dfe245a6d20e55f02381f27ec11e1eec3bf32b8112aaab88ea95c008e93
	I1017 19:08:37.614332   85117 command_runner.go:130] > b4db2cb7b47399fb64d0f31922d185a1ae009961ae05b56d9514db6f489a25eb
	I1017 19:08:37.614340   85117 command_runner.go:130] > d6eeaf9720fb0a5853cabba8afe0f0c64370fd422e21db9af2a9b6ce4b9aecc1
	I1017 19:08:37.614347   85117 command_runner.go:130] > 26c7a235bb67e91ab6abf0c0282c65a526c3d2fc628ec6008956402a02d5b1e8
	I1017 19:08:37.614369   85117 command_runner.go:130] > 171e623260fdb36d39493caf1c0b8c10efb097287233e2565304b12ece716a85
	I1017 19:08:37.614383   85117 command_runner.go:130] > 0fe4cc88e7a7a757f4debf7f3ff8f76bef81d0a36e83bd994df86baa42f47a71
	I1017 19:08:37.614397   85117 command_runner.go:130] > 86dba9687f70280ffaa952d354e90ec1a4ff74d73869d9360c56690901ad9461
	I1017 19:08:37.614406   85117 command_runner.go:130] > 4d4ae675fa012cf6e18dd10516f8c83d32b364f3e27d8068722234a797bc7b1a
	I1017 19:08:37.614460   85117 cri.go:89] found id: "5052ee3b4b13e54f7516a211d580d31d7e4856f34ebe5b5bc8a1778244018fb0"
	I1017 19:08:37.614475   85117 cri.go:89] found id: "56c02355399031d32d66f1780ee1bc7396eeb5eb1b454f946254fe345879e8e0"
	I1017 19:08:37.614481   85117 cri.go:89] found id: "56048147246b1d30ce16d066a4bbb216f1f7c9b1459e21fa60ee108fdd3aa42a"
	I1017 19:08:37.614486   85117 cri.go:89] found id: "1b1f7dfe245a6d20e55f02381f27ec11e1eec3bf32b8112aaab88ea95c008e93"
	I1017 19:08:37.614490   85117 cri.go:89] found id: "b4db2cb7b47399fb64d0f31922d185a1ae009961ae05b56d9514db6f489a25eb"
	I1017 19:08:37.614498   85117 cri.go:89] found id: "d6eeaf9720fb0a5853cabba8afe0f0c64370fd422e21db9af2a9b6ce4b9aecc1"
	I1017 19:08:37.614513   85117 cri.go:89] found id: "26c7a235bb67e91ab6abf0c0282c65a526c3d2fc628ec6008956402a02d5b1e8"
	I1017 19:08:37.614519   85117 cri.go:89] found id: "171e623260fdb36d39493caf1c0b8c10efb097287233e2565304b12ece716a85"
	I1017 19:08:37.614521   85117 cri.go:89] found id: "0fe4cc88e7a7a757f4debf7f3ff8f76bef81d0a36e83bd994df86baa42f47a71"
	I1017 19:08:37.614530   85117 cri.go:89] found id: "86dba9687f70280ffaa952d354e90ec1a4ff74d73869d9360c56690901ad9461"
	I1017 19:08:37.614535   85117 cri.go:89] found id: "4d4ae675fa012cf6e18dd10516f8c83d32b364f3e27d8068722234a797bc7b1a"
	I1017 19:08:37.614538   85117 cri.go:89] found id: ""
	I1017 19:08:37.614600   85117 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
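The `cri.go:89` "found id" lines above show how the `crictl ps -a --quiet --label ...` output becomes a container-ID list: one 64-character hex ID per line, split and filtered. A sketch of that parsing step (hypothetical helper, not minikube's actual cri.go code):

package main

import (
	"fmt"
	"strings"
)

// parseContainerIDs splits crictl's --quiet output into non-empty IDs.
func parseContainerIDs(quietOutput string) []string {
	var ids []string
	for _, line := range strings.Split(quietOutput, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	out := "5052ee3b4b13e54f7516a211d580d31d7e4856f34ebe5b5bc8a1778244018fb0\n" +
		"56c02355399031d32d66f1780ee1bc7396eeb5eb1b454f946254fe345879e8e0\n"
	for _, id := range parseContainerIDs(out) {
		fmt.Printf("found id: %q\n", id)
	}
}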
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016863 -n functional-016863
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016863 -n functional-016863: exit status 2 (236.707998ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-016863" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (393.40s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (396.61s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-016863 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-016863 get pods: exit status 1 (98.640983ms)

                                                
                                                
** stderr ** 
	E1017 19:40:45.548461   93192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.205:8441/api?timeout=32s\": dial tcp 192.168.39.205:8441: connect: connection refused"
	E1017 19:40:45.549012   93192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.205:8441/api?timeout=32s\": dial tcp 192.168.39.205:8441: connect: connection refused"
	E1017 19:40:45.550564   93192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.205:8441/api?timeout=32s\": dial tcp 192.168.39.205:8441: connect: connection refused"
	E1017 19:40:45.550958   93192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.205:8441/api?timeout=32s\": dial tcp 192.168.39.205:8441: connect: connection refused"
	E1017 19:40:45.552490   93192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.205:8441/api?timeout=32s\": dial tcp 192.168.39.205:8441: connect: connection refused"
	The connection to the server 192.168.39.205:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
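Every kubectl attempt fails with "connection refused", which means nothing is listening on 192.168.39.205:8441 at all: the apiserver process is down, not misconfigured or failing TLS. A short probe makes the distinction concrete:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A closed port fails here with "connection refused"; a running but
	// unhealthy apiserver would still accept the TCP connection.
	conn, err := net.DialTimeout("tcp", "192.168.39.205:8441", 3*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}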
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-016863 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-016863 -n functional-016863
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-016863 -n functional-016863: exit status 2 (220.291817ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-016863 logs -n 25
E1017 19:43:47.751005   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-016863 logs -n 25: (6m35.985063287s)
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-712449 --log_dir /tmp/nospam-712449 pause                                                                                        │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ unpause │ nospam-712449 --log_dir /tmp/nospam-712449 unpause                                                                                      │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ unpause │ nospam-712449 --log_dir /tmp/nospam-712449 unpause                                                                                      │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ unpause │ nospam-712449 --log_dir /tmp/nospam-712449 unpause                                                                                      │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:04 UTC │
	│ stop    │ nospam-712449 --log_dir /tmp/nospam-712449 stop                                                                                         │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:04 UTC │ 17 Oct 25 19:05 UTC │
	│ stop    │ nospam-712449 --log_dir /tmp/nospam-712449 stop                                                                                         │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ stop    │ nospam-712449 --log_dir /tmp/nospam-712449 stop                                                                                         │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ delete  │ -p nospam-712449                                                                                                                        │ nospam-712449     │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:05 UTC │
	│ start   │ -p functional-016863 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:05 UTC │ 17 Oct 25 19:06 UTC │
	│ start   │ -p functional-016863 --alsologtostderr -v=8                                                                                             │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │                     │
	│ cache   │ functional-016863 cache add registry.k8s.io/pause:3.1                                                                                   │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ functional-016863 cache add registry.k8s.io/pause:3.3                                                                                   │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ functional-016863 cache add registry.k8s.io/pause:latest                                                                                │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ functional-016863 cache add minikube-local-cache-test:functional-016863                                                                 │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ functional-016863 cache delete minikube-local-cache-test:functional-016863                                                              │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ list                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ ssh     │ functional-016863 ssh sudo crictl images                                                                                                │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ ssh     │ functional-016863 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                      │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ ssh     │ functional-016863 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                 │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │                     │
	│ cache   │ functional-016863 cache reload                                                                                                          │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ ssh     │ functional-016863 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                 │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:34 UTC │
	│ kubectl │ functional-016863 kubectl -- --context functional-016863 get pods                                                                       │ functional-016863 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:06:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:06:56.570682   85117 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:06:56.570809   85117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:56.570820   85117 out.go:374] Setting ErrFile to fd 2...
	I1017 19:06:56.570826   85117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:56.571105   85117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-75534/.minikube/bin
	I1017 19:06:56.571578   85117 out.go:368] Setting JSON to false
	I1017 19:06:56.572426   85117 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6568,"bootTime":1760721449,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:06:56.572524   85117 start.go:141] virtualization: kvm guest
	I1017 19:06:56.574519   85117 out.go:179] * [functional-016863] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:06:56.575690   85117 notify.go:220] Checking for updates...
	I1017 19:06:56.575704   85117 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:06:56.577138   85117 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:06:56.578363   85117 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig
	I1017 19:06:56.579669   85117 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube
	I1017 19:06:56.581027   85117 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:06:56.582307   85117 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:06:56.583921   85117 config.go:182] Loaded profile config "functional-016863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:06:56.584037   85117 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:06:56.584492   85117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:06:56.584589   85117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:06:56.600478   85117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35877
	I1017 19:06:56.600991   85117 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:06:56.601750   85117 main.go:141] libmachine: Using API Version  1
	I1017 19:06:56.601786   85117 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:06:56.602161   85117 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:06:56.602390   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:06:56.635697   85117 out.go:179] * Using the kvm2 driver based on existing profile
	I1017 19:06:56.637016   85117 start.go:305] selected driver: kvm2
	I1017 19:06:56.637040   85117 start.go:925] validating driver "kvm2" against &{Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:06:56.637141   85117 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:06:56.637622   85117 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:06:56.637712   85117 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:06:56.651574   85117 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:06:56.651619   85117 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:06:56.665844   85117 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:06:56.666547   85117 cni.go:84] Creating CNI manager for ""
	I1017 19:06:56.666631   85117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 19:06:56.666699   85117 start.go:349] cluster config:
	{Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:06:56.666812   85117 iso.go:125] acquiring lock: {Name:mk89d24a0bd9a0a8cf0564a4affa55e11eaff101 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:06:56.668638   85117 out.go:179] * Starting "functional-016863" primary control-plane node in "functional-016863" cluster
	I1017 19:06:56.669893   85117 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:06:56.669940   85117 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:06:56.669951   85117 cache.go:58] Caching tarball of preloaded images
	I1017 19:06:56.670102   85117 preload.go:233] Found /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:06:56.670116   85117 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:06:56.670235   85117 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/config.json ...
	I1017 19:06:56.670445   85117 start.go:360] acquireMachinesLock for functional-016863: {Name:mke0c3abe726945d0c60793aa0bf26eb33df7fed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1017 19:06:56.670494   85117 start.go:364] duration metric: took 29.325µs to acquireMachinesLock for "functional-016863"
	I1017 19:06:56.670514   85117 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:06:56.670524   85117 fix.go:54] fixHost starting: 
	I1017 19:06:56.670828   85117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:06:56.670877   85117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:06:56.683516   85117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I1017 19:06:56.683978   85117 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:06:56.684470   85117 main.go:141] libmachine: Using API Version  1
	I1017 19:06:56.684493   85117 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:06:56.684844   85117 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:06:56.685047   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:06:56.685223   85117 main.go:141] libmachine: (functional-016863) Calling .GetState
	I1017 19:06:56.686913   85117 fix.go:112] recreateIfNeeded on functional-016863: state=Running err=<nil>
	W1017 19:06:56.686945   85117 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:06:56.688754   85117 out.go:252] * Updating the running kvm2 "functional-016863" VM ...
	I1017 19:06:56.688779   85117 machine.go:93] provisionDockerMachine start ...
	I1017 19:06:56.688795   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:06:56.689021   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:56.691985   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.692501   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:56.692527   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.692713   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:56.692904   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.693142   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.693299   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:56.693474   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:56.693724   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:56.693736   85117 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:06:56.799511   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016863
	
	I1017 19:06:56.799542   85117 main.go:141] libmachine: (functional-016863) Calling .GetMachineName
	I1017 19:06:56.799819   85117 buildroot.go:166] provisioning hostname "functional-016863"
	I1017 19:06:56.799862   85117 main.go:141] libmachine: (functional-016863) Calling .GetMachineName
	I1017 19:06:56.800154   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:56.803810   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.804342   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:56.804375   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.804593   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:56.804779   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.804950   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.805112   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:56.805279   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:56.805490   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:56.805503   85117 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-016863 && echo "functional-016863" | sudo tee /etc/hostname
	I1017 19:06:56.929174   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016863
	
	I1017 19:06:56.929205   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:56.932429   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.932929   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:56.932954   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:56.933186   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:56.933423   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.933612   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:56.933826   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:56.934076   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:56.934309   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:56.934326   85117 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-016863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-016863/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-016863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:06:57.042297   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
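The SSH command above is an idempotent /etc/hosts edit: if the hostname already resolves, do nothing; else rewrite an existing 127.0.1.1 line; else append one. The same three-branch logic in Go (illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	// Branch 1: hostname already present, nothing to do
	// (mirrors the `grep -xq '.*\sfunctional-016863'` check).
	for _, line := range lines {
		for _, f := range strings.Fields(line) {
			if f == hostname {
				return nil
			}
		}
	}
	// Branch 2: rewrite an existing 127.0.1.1 line (the `sed -i` branch).
	for i, line := range lines {
		if fields := strings.Fields(line); len(fields) > 0 && fields[0] == "127.0.1.1" {
			lines[i] = "127.0.1.1 " + hostname
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	// Branch 3: append a new line (the `tee -a` branch).
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", hostname)
	return err
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "functional-016863"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}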
	I1017 19:06:57.042330   85117 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21753-75534/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-75534/.minikube}
	I1017 19:06:57.042373   85117 buildroot.go:174] setting up certificates
	I1017 19:06:57.042382   85117 provision.go:84] configureAuth start
	I1017 19:06:57.042395   85117 main.go:141] libmachine: (functional-016863) Calling .GetMachineName
	I1017 19:06:57.042715   85117 main.go:141] libmachine: (functional-016863) Calling .GetIP
	I1017 19:06:57.045902   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.046469   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.046508   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.046778   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:57.049360   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.049857   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.049902   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.050076   85117 provision.go:143] copyHostCerts
	I1017 19:06:57.050123   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem
	I1017 19:06:57.050183   85117 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem, removing ...
	I1017 19:06:57.050205   85117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem
	I1017 19:06:57.050294   85117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem (1082 bytes)
	I1017 19:06:57.050425   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem
	I1017 19:06:57.050463   85117 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem, removing ...
	I1017 19:06:57.050473   85117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem
	I1017 19:06:57.050602   85117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem (1123 bytes)
	I1017 19:06:57.050772   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem
	I1017 19:06:57.050815   85117 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem, removing ...
	I1017 19:06:57.050825   85117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem
	I1017 19:06:57.050881   85117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem (1679 bytes)
	I1017 19:06:57.051013   85117 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem org=jenkins.functional-016863 san=[127.0.0.1 192.168.39.205 functional-016863 localhost minikube]
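The server certificate is generated with SANs covering loopback, the VM IP, and the machine names from the log, so both tunneled and direct connections validate. A crypto/x509 sketch minting a certificate with exactly those SANs (self-signed here for brevity; minikube actually signs with its ca.pem):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-016863"}}, // org= from the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.205")},
		DNSNames:     []string{"functional-016863", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}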
	I1017 19:06:57.269277   85117 provision.go:177] copyRemoteCerts
	I1017 19:06:57.269362   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:06:57.269401   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:57.272458   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.272834   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.272866   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.273060   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:57.273266   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:57.273480   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:57.273640   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:06:57.362432   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:06:57.362511   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:06:57.412884   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:06:57.413107   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 19:06:57.450092   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:06:57.450212   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:06:57.486026   85117 provision.go:87] duration metric: took 443.605637ms to configureAuth
	I1017 19:06:57.486057   85117 buildroot.go:189] setting minikube options for container-runtime
	I1017 19:06:57.486228   85117 config.go:182] Loaded profile config "functional-016863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:06:57.486309   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:06:57.489476   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.489895   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:06:57.489928   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:06:57.490160   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:06:57.490354   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:57.490544   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:06:57.490703   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:06:57.490888   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:06:57.491101   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:06:57.491114   85117 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:07:03.084984   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:07:03.085021   85117 machine.go:96] duration metric: took 6.396234121s to provisionDockerMachine
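Everything in provisionDockerMachine runs over SSH: libmachine logs "About to run SSH command" and then executes the command verbatim on the guest with the machine's private key. A minimal sketch of that transport pattern using golang.org/x/crypto/ssh follows; the host, key path, and command are placeholders taken from the log, not minikube's actual implementation:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/functional-016863/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	}
	client, err := ssh.Dial("tcp", "192.168.39.205:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same shape as the provisioning command in the log: create a directory,
	// write a sysconfig fragment, restart the runtime. Simplified here.
	out, err := session.CombinedOutput(`sudo mkdir -p /etc/sysconfig && echo ok`)
	fmt.Printf("output: %s err: %v\n", out, err)
}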
	I1017 19:07:03.085042   85117 start.go:293] postStartSetup for "functional-016863" (driver="kvm2")
	I1017 19:07:03.085056   85117 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:07:03.085084   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.085514   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:07:03.085593   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.089211   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.089621   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.089655   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.089838   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.090055   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.090184   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.090354   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:07:03.173813   85117 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:07:03.179411   85117 command_runner.go:130] > NAME=Buildroot
	I1017 19:07:03.179437   85117 command_runner.go:130] > VERSION=2025.02-dirty
	I1017 19:07:03.179441   85117 command_runner.go:130] > ID=buildroot
	I1017 19:07:03.179446   85117 command_runner.go:130] > VERSION_ID=2025.02
	I1017 19:07:03.179452   85117 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1017 19:07:03.179493   85117 info.go:137] Remote host: Buildroot 2025.02
	I1017 19:07:03.179508   85117 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-75534/.minikube/addons for local assets ...
	I1017 19:07:03.179595   85117 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-75534/.minikube/files for local assets ...
	I1017 19:07:03.179714   85117 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> 794392.pem in /etc/ssl/certs
	I1017 19:07:03.179729   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> /etc/ssl/certs/794392.pem
	I1017 19:07:03.179835   85117 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts -> hosts in /etc/test/nested/copy/79439
	I1017 19:07:03.179847   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts -> /etc/test/nested/copy/79439/hosts
	I1017 19:07:03.179893   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/79439
	I1017 19:07:03.192128   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem --> /etc/ssl/certs/794392.pem (1708 bytes)
	I1017 19:07:03.223838   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts --> /etc/test/nested/copy/79439/hosts (40 bytes)
	I1017 19:07:03.313679   85117 start.go:296] duration metric: took 228.61978ms for postStartSetup
	I1017 19:07:03.313721   85117 fix.go:56] duration metric: took 6.643198174s for fixHost
	I1017 19:07:03.313742   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.317578   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.318077   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.318115   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.318367   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.318648   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.318838   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.319029   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.319295   85117 main.go:141] libmachine: Using SSH client type: native
	I1017 19:07:03.319597   85117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1017 19:07:03.319613   85117 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1017 19:07:03.479608   85117 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760728023.470011514
	
	I1017 19:07:03.479635   85117 fix.go:216] guest clock: 1760728023.470011514
	I1017 19:07:03.479642   85117 fix.go:229] Guest: 2025-10-17 19:07:03.470011514 +0000 UTC Remote: 2025-10-17 19:07:03.313724873 +0000 UTC m=+6.781586281 (delta=156.286641ms)
	I1017 19:07:03.479664   85117 fix.go:200] guest clock delta is within tolerance: 156.286641ms
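The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it against the host-side reference time, and accept the drift when the absolute difference stays inside a tolerance. The arithmetic is a signed duration against a threshold; a tiny sketch using the values from this log (the 2s tolerance here is an assumption for illustration, not minikube's configured value):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values from the log: host-side reference time and the guest clock.
	remote := time.Date(2025, 10, 17, 19, 7, 3, 313724873, time.UTC)
	guest := time.Date(2025, 10, 17, 19, 7, 3, 470011514, time.UTC)

	delta := guest.Sub(remote)        // 156.286641ms in the log above
	const tolerance = 2 * time.Second // illustrative threshold, assumed

	d := delta
	if d < 0 {
		d = -d
	}
	if d <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}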
	I1017 19:07:03.479671   85117 start.go:83] releasing machines lock for "functional-016863", held for 6.809163445s
	I1017 19:07:03.479692   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.480016   85117 main.go:141] libmachine: (functional-016863) Calling .GetIP
	I1017 19:07:03.483255   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.483786   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.483830   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.484026   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.484650   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.484910   85117 main.go:141] libmachine: (functional-016863) Calling .DriverName
	I1017 19:07:03.485041   85117 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:07:03.485087   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.485146   85117 ssh_runner.go:195] Run: cat /version.json
	I1017 19:07:03.485170   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHHostname
	I1017 19:07:03.488247   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.488613   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.488732   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.488760   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.488948   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.489117   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:07:03.489150   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.489166   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:07:03.489373   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHPort
	I1017 19:07:03.489440   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.489584   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHKeyPath
	I1017 19:07:03.489660   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:07:03.489750   85117 main.go:141] libmachine: (functional-016863) Calling .GetSSHUsername
	I1017 19:07:03.489896   85117 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/functional-016863/id_rsa Username:docker}
	I1017 19:07:03.669674   85117 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1017 19:07:03.669755   85117 command_runner.go:130] > {"iso_version": "v1.37.0-1760609724-21757", "kicbase_version": "v0.0.48-1760363564-21724", "minikube_version": "v1.37.0", "commit": "fd6729aa481bc45098452b0ed0ffbe097c29d1bb"}
	I1017 19:07:03.669885   85117 ssh_runner.go:195] Run: systemctl --version
	I1017 19:07:03.691813   85117 command_runner.go:130] > systemd 256 (256.7)
	I1017 19:07:03.691879   85117 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1017 19:07:03.691965   85117 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:07:03.942910   85117 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1017 19:07:03.963385   85117 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1017 19:07:03.963654   85117 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:07:03.963723   85117 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:07:04.004504   85117 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:07:04.004543   85117 start.go:495] detecting cgroup driver to use...
	I1017 19:07:04.004649   85117 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:07:04.048623   85117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:07:04.093677   85117 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:07:04.093751   85117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:07:04.125946   85117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:07:04.177031   85117 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:07:04.556434   85117 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:07:04.871840   85117 docker.go:234] disabling docker service ...
	I1017 19:07:04.871920   85117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:07:04.914455   85117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:07:04.944209   85117 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:07:05.273173   85117 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:07:05.563772   85117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:07:05.602259   85117 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:07:05.639391   85117 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1017 19:07:05.639452   85117 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:07:05.639509   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.662293   85117 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:07:05.662360   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.681766   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.702415   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.723309   85117 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:07:05.743334   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.758794   85117 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:07:05.777348   85117 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
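Taken together, the sed commands above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl pinned. An illustrative fragment of the resulting drop-in, reconstructed from those commands rather than read back from the VM (section placement is assumed):

# /etc/crio/crio.conf.d/02-crio.conf -- illustrative reconstruction
[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]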
	I1017 19:07:05.792297   85117 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:07:05.810337   85117 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1017 19:07:05.810427   85117 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:07:05.829378   85117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:07:06.061473   85117 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:08:36.459335   85117 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.39776602s)
	I1017 19:08:36.459402   85117 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:08:36.459487   85117 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:08:36.466176   85117 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1017 19:08:36.466208   85117 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1017 19:08:36.466216   85117 command_runner.go:130] > Device: 0,23	Inode: 1978        Links: 1
	I1017 19:08:36.466222   85117 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1017 19:08:36.466229   85117 command_runner.go:130] > Access: 2025-10-17 19:08:36.354383352 +0000
	I1017 19:08:36.466239   85117 command_runner.go:130] > Modify: 2025-10-17 19:08:36.274379788 +0000
	I1017 19:08:36.466245   85117 command_runner.go:130] > Change: 2025-10-17 19:08:36.274379788 +0000
	I1017 19:08:36.466267   85117 command_runner.go:130] >  Birth: 2025-10-17 19:08:36.274379788 +0000
	I1017 19:08:36.466319   85117 start.go:563] Will wait 60s for crictl version
	I1017 19:08:36.466390   85117 ssh_runner.go:195] Run: which crictl
	I1017 19:08:36.470951   85117 command_runner.go:130] > /usr/bin/crictl
	I1017 19:08:36.471037   85117 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1017 19:08:36.516077   85117 command_runner.go:130] > Version:  0.1.0
	I1017 19:08:36.516101   85117 command_runner.go:130] > RuntimeName:  cri-o
	I1017 19:08:36.516106   85117 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1017 19:08:36.516111   85117 command_runner.go:130] > RuntimeApiVersion:  v1
	I1017 19:08:36.516132   85117 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1017 19:08:36.516223   85117 ssh_runner.go:195] Run: crio --version
	I1017 19:08:36.548879   85117 command_runner.go:130] > crio version 1.29.1
	I1017 19:08:36.548904   85117 command_runner.go:130] > Version:        1.29.1
	I1017 19:08:36.548909   85117 command_runner.go:130] > GitCommit:      unknown
	I1017 19:08:36.548925   85117 command_runner.go:130] > GitCommitDate:  unknown
	I1017 19:08:36.548929   85117 command_runner.go:130] > GitTreeState:   clean
	I1017 19:08:36.548935   85117 command_runner.go:130] > BuildDate:      2025-10-16T13:23:57Z
	I1017 19:08:36.548939   85117 command_runner.go:130] > GoVersion:      go1.23.4
	I1017 19:08:36.548942   85117 command_runner.go:130] > Compiler:       gc
	I1017 19:08:36.548947   85117 command_runner.go:130] > Platform:       linux/amd64
	I1017 19:08:36.548951   85117 command_runner.go:130] > Linkmode:       dynamic
	I1017 19:08:36.548955   85117 command_runner.go:130] > BuildTags:      
	I1017 19:08:36.548959   85117 command_runner.go:130] >   containers_image_ostree_stub
	I1017 19:08:36.548963   85117 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1017 19:08:36.548966   85117 command_runner.go:130] >   btrfs_noversion
	I1017 19:08:36.548970   85117 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1017 19:08:36.548974   85117 command_runner.go:130] >   libdm_no_deferred_remove
	I1017 19:08:36.548978   85117 command_runner.go:130] >   seccomp
	I1017 19:08:36.548982   85117 command_runner.go:130] > LDFlags:          unknown
	I1017 19:08:36.549001   85117 command_runner.go:130] > SeccompEnabled:   true
	I1017 19:08:36.549005   85117 command_runner.go:130] > AppArmorEnabled:  false
	I1017 19:08:36.549081   85117 ssh_runner.go:195] Run: crio --version
	I1017 19:08:36.579072   85117 command_runner.go:130] > crio version 1.29.1
	I1017 19:08:36.579097   85117 command_runner.go:130] > Version:        1.29.1
	I1017 19:08:36.579102   85117 command_runner.go:130] > GitCommit:      unknown
	I1017 19:08:36.579106   85117 command_runner.go:130] > GitCommitDate:  unknown
	I1017 19:08:36.579109   85117 command_runner.go:130] > GitTreeState:   clean
	I1017 19:08:36.579114   85117 command_runner.go:130] > BuildDate:      2025-10-16T13:23:57Z
	I1017 19:08:36.579118   85117 command_runner.go:130] > GoVersion:      go1.23.4
	I1017 19:08:36.579122   85117 command_runner.go:130] > Compiler:       gc
	I1017 19:08:36.579126   85117 command_runner.go:130] > Platform:       linux/amd64
	I1017 19:08:36.579129   85117 command_runner.go:130] > Linkmode:       dynamic
	I1017 19:08:36.579133   85117 command_runner.go:130] > BuildTags:      
	I1017 19:08:36.579137   85117 command_runner.go:130] >   containers_image_ostree_stub
	I1017 19:08:36.579141   85117 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1017 19:08:36.579144   85117 command_runner.go:130] >   btrfs_noversion
	I1017 19:08:36.579148   85117 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1017 19:08:36.579152   85117 command_runner.go:130] >   libdm_no_deferred_remove
	I1017 19:08:36.579156   85117 command_runner.go:130] >   seccomp
	I1017 19:08:36.579159   85117 command_runner.go:130] > LDFlags:          unknown
	I1017 19:08:36.579162   85117 command_runner.go:130] > SeccompEnabled:   true
	I1017 19:08:36.579166   85117 command_runner.go:130] > AppArmorEnabled:  false
	I1017 19:08:36.581921   85117 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1017 19:08:36.583156   85117 main.go:141] libmachine: (functional-016863) Calling .GetIP
	I1017 19:08:36.586303   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:08:36.586761   85117 main.go:141] libmachine: (functional-016863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:76:08", ip: ""} in network mk-functional-016863: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:54 +0000 UTC Type:0 Mac:52:54:00:9b:76:08 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-016863 Clientid:01:52:54:00:9b:76:08}
	I1017 19:08:36.586791   85117 main.go:141] libmachine: (functional-016863) DBG | domain functional-016863 has defined IP address 192.168.39.205 and MAC address 52:54:00:9b:76:08 in network mk-functional-016863
	I1017 19:08:36.587045   85117 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1017 19:08:36.592096   85117 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1017 19:08:36.592194   85117 kubeadm.go:883] updating cluster {Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:08:36.592323   85117 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:08:36.592384   85117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:08:36.644213   85117 command_runner.go:130] > {
	I1017 19:08:36.644235   85117 command_runner.go:130] >   "images": [
	I1017 19:08:36.644239   85117 command_runner.go:130] >     {
	I1017 19:08:36.644246   85117 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1017 19:08:36.644251   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644257   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1017 19:08:36.644260   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644265   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644287   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1017 19:08:36.644298   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1017 19:08:36.644304   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644310   85117 command_runner.go:130] >       "size": "109379124",
	I1017 19:08:36.644319   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644328   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644357   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644368   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644379   85117 command_runner.go:130] >     },
	I1017 19:08:36.644384   85117 command_runner.go:130] >     {
	I1017 19:08:36.644397   85117 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1017 19:08:36.644403   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644412   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1017 19:08:36.644418   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644429   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644441   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1017 19:08:36.644455   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1017 19:08:36.644463   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644489   85117 command_runner.go:130] >       "size": "31470524",
	I1017 19:08:36.644500   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644506   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644517   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644524   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644532   85117 command_runner.go:130] >     },
	I1017 19:08:36.644537   85117 command_runner.go:130] >     {
	I1017 19:08:36.644546   85117 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1017 19:08:36.644570   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644577   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1017 19:08:36.644586   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644592   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644602   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1017 19:08:36.644610   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1017 19:08:36.644616   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644620   85117 command_runner.go:130] >       "size": "76103547",
	I1017 19:08:36.644623   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644628   85117 command_runner.go:130] >       "username": "nonroot",
	I1017 19:08:36.644634   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644638   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644644   85117 command_runner.go:130] >     },
	I1017 19:08:36.644655   85117 command_runner.go:130] >     {
	I1017 19:08:36.644664   85117 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1017 19:08:36.644668   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644675   85117 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1017 19:08:36.644678   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644685   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644692   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1017 19:08:36.644707   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1017 19:08:36.644713   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644716   85117 command_runner.go:130] >       "size": "195976448",
	I1017 19:08:36.644720   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.644726   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.644729   85117 command_runner.go:130] >       },
	I1017 19:08:36.644733   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644737   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644741   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644744   85117 command_runner.go:130] >     },
	I1017 19:08:36.644747   85117 command_runner.go:130] >     {
	I1017 19:08:36.644753   85117 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1017 19:08:36.644760   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644764   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1017 19:08:36.644767   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644772   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644781   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1017 19:08:36.644788   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1017 19:08:36.644794   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644798   85117 command_runner.go:130] >       "size": "89046001",
	I1017 19:08:36.644802   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.644806   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.644810   85117 command_runner.go:130] >       },
	I1017 19:08:36.644813   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644819   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644822   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644830   85117 command_runner.go:130] >     },
	I1017 19:08:36.644836   85117 command_runner.go:130] >     {
	I1017 19:08:36.644842   85117 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1017 19:08:36.644845   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644850   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1017 19:08:36.644856   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644860   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644868   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1017 19:08:36.644877   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1017 19:08:36.644880   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644884   85117 command_runner.go:130] >       "size": "76004181",
	I1017 19:08:36.644888   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.644892   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.644895   85117 command_runner.go:130] >       },
	I1017 19:08:36.644899   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644902   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644908   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644911   85117 command_runner.go:130] >     },
	I1017 19:08:36.644914   85117 command_runner.go:130] >     {
	I1017 19:08:36.644920   85117 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1017 19:08:36.644924   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.644928   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1017 19:08:36.644932   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644944   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.644951   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1017 19:08:36.644958   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1017 19:08:36.644961   85117 command_runner.go:130] >       ],
	I1017 19:08:36.644964   85117 command_runner.go:130] >       "size": "73138073",
	I1017 19:08:36.644968   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.644972   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.644975   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.644979   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.644982   85117 command_runner.go:130] >     },
	I1017 19:08:36.644991   85117 command_runner.go:130] >     {
	I1017 19:08:36.644999   85117 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1017 19:08:36.645003   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.645010   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1017 19:08:36.645013   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645017   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.645041   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1017 19:08:36.645052   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1017 19:08:36.645055   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645059   85117 command_runner.go:130] >       "size": "53844823",
	I1017 19:08:36.645062   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.645066   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.645068   85117 command_runner.go:130] >       },
	I1017 19:08:36.645072   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.645075   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.645079   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.645081   85117 command_runner.go:130] >     },
	I1017 19:08:36.645084   85117 command_runner.go:130] >     {
	I1017 19:08:36.645090   85117 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1017 19:08:36.645093   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.645097   85117 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1017 19:08:36.645100   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645104   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.645110   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1017 19:08:36.645116   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1017 19:08:36.645120   85117 command_runner.go:130] >       ],
	I1017 19:08:36.645123   85117 command_runner.go:130] >       "size": "742092",
	I1017 19:08:36.645126   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.645129   85117 command_runner.go:130] >         "value": "65535"
	I1017 19:08:36.645132   85117 command_runner.go:130] >       },
	I1017 19:08:36.645136   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.645143   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.645147   85117 command_runner.go:130] >       "pinned": true
	I1017 19:08:36.645154   85117 command_runner.go:130] >     }
	I1017 19:08:36.645157   85117 command_runner.go:130] >   ]
	I1017 19:08:36.645160   85117 command_runner.go:130] > }
	I1017 19:08:36.645398   85117 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:08:36.645415   85117 crio.go:433] Images already preloaded, skipping extraction
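The preload check in crio.go boils down to parsing the `sudo crictl images --output json` output above and confirming that every required tag is present before deciding whether to extract the preload tarball. A minimal sketch of that check against the JSON shape shown in the log (the struct definition and the required-tag list are assumptions for illustration, derived from the listing above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the JSON shape emitted by `crictl images --output json`
// as seen in the log: {"images": [{"repoTags": [...], ...}, ...]}.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}

	// Index every tag the runtime already has.
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}

	// Required tags taken from the listing in the log above.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.34.1",
		"registry.k8s.io/kube-controller-manager:v1.34.1",
		"registry.k8s.io/kube-scheduler:v1.34.1",
		"registry.k8s.io/kube-proxy:v1.34.1",
		"registry.k8s.io/etcd:3.6.4-0",
		"registry.k8s.io/coredns/coredns:v1.12.1",
		"registry.k8s.io/pause:3.10.1",
	}
	for _, tag := range required {
		if !have[tag] {
			fmt.Println("missing, extraction needed:", tag)
			return
		}
	}
	fmt.Println("all images are preloaded for cri-o runtime")
}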
	I1017 19:08:36.645478   85117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:08:36.684800   85117 command_runner.go:130] > {
	I1017 19:08:36.684832   85117 command_runner.go:130] >   "images": [
	I1017 19:08:36.684855   85117 command_runner.go:130] >     {
	I1017 19:08:36.684869   85117 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1017 19:08:36.684877   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.684887   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1017 19:08:36.684892   85117 command_runner.go:130] >       ],
	I1017 19:08:36.684896   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.684909   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1017 19:08:36.684916   85117 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1017 19:08:36.684919   85117 command_runner.go:130] >       ],
	I1017 19:08:36.684923   85117 command_runner.go:130] >       "size": "109379124",
	I1017 19:08:36.684927   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.684930   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.684935   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.684938   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.684942   85117 command_runner.go:130] >     },
	I1017 19:08:36.684945   85117 command_runner.go:130] >     {
	I1017 19:08:36.684950   85117 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1017 19:08:36.684955   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.684960   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1017 19:08:36.684973   85117 command_runner.go:130] >       ],
	I1017 19:08:36.684980   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.684994   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1017 19:08:36.685002   85117 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1017 19:08:36.685005   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685013   85117 command_runner.go:130] >       "size": "31470524",
	I1017 19:08:36.685018   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.685021   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685025   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685029   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685032   85117 command_runner.go:130] >     },
	I1017 19:08:36.685035   85117 command_runner.go:130] >     {
	I1017 19:08:36.685041   85117 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1017 19:08:36.685045   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685055   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1017 19:08:36.685061   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685064   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685072   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1017 19:08:36.685081   85117 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1017 19:08:36.685084   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685088   85117 command_runner.go:130] >       "size": "76103547",
	I1017 19:08:36.685092   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.685095   85117 command_runner.go:130] >       "username": "nonroot",
	I1017 19:08:36.685098   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685105   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685108   85117 command_runner.go:130] >     },
	I1017 19:08:36.685111   85117 command_runner.go:130] >     {
	I1017 19:08:36.685116   85117 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1017 19:08:36.685121   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685125   85117 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1017 19:08:36.685128   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685132   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685140   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1017 19:08:36.685152   85117 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1017 19:08:36.685158   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685162   85117 command_runner.go:130] >       "size": "195976448",
	I1017 19:08:36.685165   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685169   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685172   85117 command_runner.go:130] >       },
	I1017 19:08:36.685176   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685179   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685183   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685186   85117 command_runner.go:130] >     },
	I1017 19:08:36.685195   85117 command_runner.go:130] >     {
	I1017 19:08:36.685202   85117 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1017 19:08:36.685205   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685209   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1017 19:08:36.685217   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685224   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685230   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1017 19:08:36.685243   85117 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1017 19:08:36.685249   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685252   85117 command_runner.go:130] >       "size": "89046001",
	I1017 19:08:36.685256   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685259   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685263   85117 command_runner.go:130] >       },
	I1017 19:08:36.685266   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685270   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685274   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685277   85117 command_runner.go:130] >     },
	I1017 19:08:36.685280   85117 command_runner.go:130] >     {
	I1017 19:08:36.685292   85117 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1017 19:08:36.685301   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685310   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1017 19:08:36.685322   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685332   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685344   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1017 19:08:36.685361   85117 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1017 19:08:36.685371   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685378   85117 command_runner.go:130] >       "size": "76004181",
	I1017 19:08:36.685388   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685394   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685403   85117 command_runner.go:130] >       },
	I1017 19:08:36.685407   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685414   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685418   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685421   85117 command_runner.go:130] >     },
	I1017 19:08:36.685424   85117 command_runner.go:130] >     {
	I1017 19:08:36.685430   85117 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1017 19:08:36.685437   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685448   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1017 19:08:36.685454   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685457   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685464   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1017 19:08:36.685473   85117 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1017 19:08:36.685476   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685483   85117 command_runner.go:130] >       "size": "73138073",
	I1017 19:08:36.685487   85117 command_runner.go:130] >       "uid": null,
	I1017 19:08:36.685491   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685495   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685498   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685502   85117 command_runner.go:130] >     },
	I1017 19:08:36.685505   85117 command_runner.go:130] >     {
	I1017 19:08:36.685511   85117 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1017 19:08:36.685517   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685522   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1017 19:08:36.685528   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685531   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685577   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1017 19:08:36.685591   85117 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1017 19:08:36.685594   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685598   85117 command_runner.go:130] >       "size": "53844823",
	I1017 19:08:36.685601   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685604   85117 command_runner.go:130] >         "value": "0"
	I1017 19:08:36.685607   85117 command_runner.go:130] >       },
	I1017 19:08:36.685611   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685614   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685618   85117 command_runner.go:130] >       "pinned": false
	I1017 19:08:36.685621   85117 command_runner.go:130] >     },
	I1017 19:08:36.685624   85117 command_runner.go:130] >     {
	I1017 19:08:36.685629   85117 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1017 19:08:36.685638   85117 command_runner.go:130] >       "repoTags": [
	I1017 19:08:36.685642   85117 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1017 19:08:36.685651   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685658   85117 command_runner.go:130] >       "repoDigests": [
	I1017 19:08:36.685664   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1017 19:08:36.685673   85117 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1017 19:08:36.685677   85117 command_runner.go:130] >       ],
	I1017 19:08:36.685680   85117 command_runner.go:130] >       "size": "742092",
	I1017 19:08:36.685684   85117 command_runner.go:130] >       "uid": {
	I1017 19:08:36.685688   85117 command_runner.go:130] >         "value": "65535"
	I1017 19:08:36.685691   85117 command_runner.go:130] >       },
	I1017 19:08:36.685697   85117 command_runner.go:130] >       "username": "",
	I1017 19:08:36.685700   85117 command_runner.go:130] >       "spec": null,
	I1017 19:08:36.685703   85117 command_runner.go:130] >       "pinned": true
	I1017 19:08:36.685706   85117 command_runner.go:130] >     }
	I1017 19:08:36.685711   85117 command_runner.go:130] >   ]
	I1017 19:08:36.685714   85117 command_runner.go:130] > }
	I1017 19:08:36.685822   85117 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:08:36.685834   85117 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:08:36.685842   85117 kubeadm.go:934] updating node { 192.168.39.205 8441 v1.34.1 crio true true} ...
	I1017 19:08:36.685955   85117 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-016863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:08:36.686028   85117 ssh_runner.go:195] Run: crio config
	I1017 19:08:36.721698   85117 command_runner.go:130] ! time="2025-10-17 19:08:36.711815300Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1017 19:08:36.726934   85117 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1017 19:08:36.733071   85117 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1017 19:08:36.733099   85117 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1017 19:08:36.733109   85117 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1017 19:08:36.733113   85117 command_runner.go:130] > #
	I1017 19:08:36.733123   85117 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1017 19:08:36.733131   85117 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1017 19:08:36.733140   85117 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1017 19:08:36.733156   85117 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1017 19:08:36.733165   85117 command_runner.go:130] > # reload'.
	I1017 19:08:36.733177   85117 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1017 19:08:36.733189   85117 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1017 19:08:36.733199   85117 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1017 19:08:36.733209   85117 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1017 19:08:36.733222   85117 command_runner.go:130] > [crio]
	I1017 19:08:36.733230   85117 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1017 19:08:36.733234   85117 command_runner.go:130] > # containers images, in this directory.
	I1017 19:08:36.733241   85117 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1017 19:08:36.733256   85117 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1017 19:08:36.733263   85117 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1017 19:08:36.733270   85117 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory rather than under Root.
	I1017 19:08:36.733277   85117 command_runner.go:130] > # imagestore = ""
	I1017 19:08:36.733283   85117 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1017 19:08:36.733291   85117 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1017 19:08:36.733296   85117 command_runner.go:130] > # storage_driver = "overlay"
	I1017 19:08:36.733307   85117 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1017 19:08:36.733320   85117 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1017 19:08:36.733327   85117 command_runner.go:130] > storage_option = [
	I1017 19:08:36.733337   85117 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1017 19:08:36.733342   85117 command_runner.go:130] > ]
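	For reference, the [crio] storage settings above collapse into a small TOML override. A minimal sketch (paths and mount options mirror the defaults printed above; pinning storage_driver explicitly is the only assumption):
	
	    [crio]
	    root = "/var/lib/containers/storage"
	    runroot = "/var/run/containers/storage"
	    # Pin the driver instead of inheriting it from containers-storage.conf(5).
	    storage_driver = "overlay"
	    storage_option = [
	        "overlay.mountopt=nodev,metacopy=on",
	    ]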
	I1017 19:08:36.733354   85117 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1017 19:08:36.733363   85117 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1017 19:08:36.733368   85117 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1017 19:08:36.733374   85117 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1017 19:08:36.733380   85117 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1017 19:08:36.733387   85117 command_runner.go:130] > # always happen on a node reboot
	I1017 19:08:36.733391   85117 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1017 19:08:36.733411   85117 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1017 19:08:36.733424   85117 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1017 19:08:36.733432   85117 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1017 19:08:36.733443   85117 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1017 19:08:36.733456   85117 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1017 19:08:36.733470   85117 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1017 19:08:36.733480   85117 command_runner.go:130] > # internal_wipe = true
	I1017 19:08:36.733489   85117 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1017 19:08:36.733497   85117 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1017 19:08:36.733504   85117 command_runner.go:130] > # internal_repair = false
	I1017 19:08:36.733522   85117 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1017 19:08:36.733534   85117 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1017 19:08:36.733544   85117 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1017 19:08:36.733565   85117 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1017 19:08:36.733582   85117 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1017 19:08:36.733590   85117 command_runner.go:130] > [crio.api]
	I1017 19:08:36.733598   85117 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1017 19:08:36.733608   85117 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1017 19:08:36.733616   85117 command_runner.go:130] > # IP address on which the stream server will listen.
	I1017 19:08:36.733626   85117 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1017 19:08:36.733636   85117 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1017 19:08:36.733647   85117 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1017 19:08:36.733653   85117 command_runner.go:130] > # stream_port = "0"
	I1017 19:08:36.733665   85117 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1017 19:08:36.733671   85117 command_runner.go:130] > # stream_enable_tls = false
	I1017 19:08:36.733683   85117 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1017 19:08:36.733692   85117 command_runner.go:130] > # stream_idle_timeout = ""
	I1017 19:08:36.733699   85117 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1017 19:08:36.733709   85117 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1017 19:08:36.733719   85117 command_runner.go:130] > # minutes.
	I1017 19:08:36.733729   85117 command_runner.go:130] > # stream_tls_cert = ""
	I1017 19:08:36.733738   85117 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1017 19:08:36.733749   85117 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1017 19:08:36.733755   85117 command_runner.go:130] > # stream_tls_key = ""
	I1017 19:08:36.733767   85117 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1017 19:08:36.733777   85117 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1017 19:08:36.733807   85117 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1017 19:08:36.733817   85117 command_runner.go:130] > # stream_tls_ca = ""
	I1017 19:08:36.733828   85117 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1017 19:08:36.733839   85117 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1017 19:08:36.733850   85117 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1017 19:08:36.733860   85117 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
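	As an illustration of the [crio.api] table, the sketch below turns on TLS for the stream server; the port and certificate paths are hypothetical, and the message-size limits repeat the values printed above:
	
	    [crio.api]
	    listen = "/var/run/crio/crio.sock"
	    stream_address = "127.0.0.1"
	    stream_port = "10010"                      # assumption: any free port; "0" picks one at random
	    stream_enable_tls = true
	    stream_tls_cert = "/etc/crio/stream.crt"   # hypothetical path; picked up within 5 minutes on change
	    stream_tls_key = "/etc/crio/stream.key"    # hypothetical path
	    grpc_max_send_msg_size = 16777216
	    grpc_max_recv_msg_size = 16777216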
	I1017 19:08:36.733870   85117 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1017 19:08:36.733888   85117 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1017 19:08:36.733894   85117 command_runner.go:130] > [crio.runtime]
	I1017 19:08:36.733902   85117 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1017 19:08:36.733914   85117 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1017 19:08:36.733923   85117 command_runner.go:130] > # "nofile=1024:2048"
	I1017 19:08:36.733936   85117 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1017 19:08:36.733945   85117 command_runner.go:130] > # default_ulimits = [
	I1017 19:08:36.733950   85117 command_runner.go:130] > # ]
	I1017 19:08:36.733961   85117 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1017 19:08:36.733966   85117 command_runner.go:130] > # no_pivot = false
	I1017 19:08:36.733974   85117 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1017 19:08:36.733984   85117 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1017 19:08:36.733990   85117 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1017 19:08:36.734005   85117 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1017 19:08:36.734017   85117 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1017 19:08:36.734041   85117 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1017 19:08:36.734050   85117 command_runner.go:130] > conmon = "/usr/bin/conmon"
	I1017 19:08:36.734057   85117 command_runner.go:130] > # Cgroup setting for conmon
	I1017 19:08:36.734070   85117 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1017 19:08:36.734079   85117 command_runner.go:130] > conmon_cgroup = "pod"
	I1017 19:08:36.734085   85117 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1017 19:08:36.734096   85117 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1017 19:08:36.734105   85117 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1017 19:08:36.734115   85117 command_runner.go:130] > conmon_env = [
	I1017 19:08:36.734124   85117 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1017 19:08:36.734133   85117 command_runner.go:130] > ]
	I1017 19:08:36.734142   85117 command_runner.go:130] > # Additional environment variables to set for all the
	I1017 19:08:36.734152   85117 command_runner.go:130] > # containers. These are overridden if set in the
	I1017 19:08:36.734161   85117 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1017 19:08:36.734170   85117 command_runner.go:130] > # default_env = [
	I1017 19:08:36.734175   85117 command_runner.go:130] > # ]
	I1017 19:08:36.734186   85117 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1017 19:08:36.734193   85117 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1017 19:08:36.734374   85117 command_runner.go:130] > # selinux = false
	I1017 19:08:36.734484   85117 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1017 19:08:36.734495   85117 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1017 19:08:36.734505   85117 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1017 19:08:36.734516   85117 command_runner.go:130] > # seccomp_profile = ""
	I1017 19:08:36.734531   85117 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1017 19:08:36.734543   85117 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1017 19:08:36.734567   85117 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1017 19:08:36.734585   85117 command_runner.go:130] > # which might increase security.
	I1017 19:08:36.734593   85117 command_runner.go:130] > # This option is currently deprecated,
	I1017 19:08:36.734610   85117 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1017 19:08:36.734624   85117 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1017 19:08:36.734634   85117 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1017 19:08:36.734646   85117 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1017 19:08:36.734697   85117 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1017 19:08:36.735591   85117 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1017 19:08:36.735609   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.735623   85117 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1017 19:08:36.735636   85117 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1017 19:08:36.735643   85117 command_runner.go:130] > # the cgroup blockio controller.
	I1017 19:08:36.735656   85117 command_runner.go:130] > # blockio_config_file = ""
	I1017 19:08:36.735670   85117 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1017 19:08:36.735675   85117 command_runner.go:130] > # blockio parameters.
	I1017 19:08:36.735681   85117 command_runner.go:130] > # blockio_reload = false
	I1017 19:08:36.735706   85117 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1017 19:08:36.735733   85117 command_runner.go:130] > # irqbalance daemon.
	I1017 19:08:36.735812   85117 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1017 19:08:36.735833   85117 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask that CRI-O should
	I1017 19:08:36.736170   85117 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1017 19:08:36.736193   85117 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1017 19:08:36.736203   85117 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1017 19:08:36.736229   85117 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1017 19:08:36.736240   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.736246   85117 command_runner.go:130] > # rdt_config_file = ""
	I1017 19:08:36.736258   85117 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1017 19:08:36.736268   85117 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1017 19:08:36.736300   85117 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1017 19:08:36.736312   85117 command_runner.go:130] > # separate_pull_cgroup = ""
	I1017 19:08:36.736321   85117 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1017 19:08:36.736329   85117 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1017 19:08:36.736335   85117 command_runner.go:130] > # will be added.
	I1017 19:08:36.736341   85117 command_runner.go:130] > # default_capabilities = [
	I1017 19:08:36.736349   85117 command_runner.go:130] > # 	"CHOWN",
	I1017 19:08:36.736355   85117 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1017 19:08:36.736360   85117 command_runner.go:130] > # 	"FSETID",
	I1017 19:08:36.736366   85117 command_runner.go:130] > # 	"FOWNER",
	I1017 19:08:36.736374   85117 command_runner.go:130] > # 	"SETGID",
	I1017 19:08:36.736379   85117 command_runner.go:130] > # 	"SETUID",
	I1017 19:08:36.736384   85117 command_runner.go:130] > # 	"SETPCAP",
	I1017 19:08:36.736392   85117 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1017 19:08:36.736401   85117 command_runner.go:130] > # 	"KILL",
	I1017 19:08:36.736409   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736420   85117 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1017 19:08:36.736433   85117 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1017 19:08:36.736444   85117 command_runner.go:130] > # add_inheritable_capabilities = false
	I1017 19:08:36.736452   85117 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1017 19:08:36.736463   85117 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1017 19:08:36.736472   85117 command_runner.go:130] > default_sysctls = [
	I1017 19:08:36.736482   85117 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1017 19:08:36.736490   85117 command_runner.go:130] > ]
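	Uncommenting the capability defaults above and keeping the shipped sysctl gives a self-contained [crio.runtime] baseline; a sketch with values copied from the dump:
	
	    [crio.runtime]
	    default_capabilities = [
	        "CHOWN", "DAC_OVERRIDE", "FSETID", "FOWNER",
	        "SETGID", "SETUID", "SETPCAP", "NET_BIND_SERVICE", "KILL",
	    ]
	    default_sysctls = [
	        "net.ipv4.ip_unprivileged_port_start=0",
	    ]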
	I1017 19:08:36.736501   85117 command_runner.go:130] > # List of devices on the host that a
	I1017 19:08:36.736513   85117 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1017 19:08:36.736521   85117 command_runner.go:130] > # allowed_devices = [
	I1017 19:08:36.736526   85117 command_runner.go:130] > # 	"/dev/fuse",
	I1017 19:08:36.736534   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736541   85117 command_runner.go:130] > # List of additional devices, specified as
	I1017 19:08:36.736569   85117 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1017 19:08:36.736580   85117 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1017 19:08:36.736589   85117 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1017 19:08:36.736598   85117 command_runner.go:130] > # additional_devices = [
	I1017 19:08:36.736602   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736612   85117 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1017 19:08:36.736621   85117 command_runner.go:130] > # cdi_spec_dirs = [
	I1017 19:08:36.736627   85117 command_runner.go:130] > # 	"/etc/cdi",
	I1017 19:08:36.736635   85117 command_runner.go:130] > # 	"/var/run/cdi",
	I1017 19:08:36.736640   85117 command_runner.go:130] > # ]
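	The device lists follow the <device-on-host>:<device-on-container>:<permissions> format described above; a sketch with illustrative device names:
	
	    [crio.runtime]
	    allowed_devices = [
	        "/dev/fuse",               # the commented-out default above
	    ]
	    additional_devices = [
	        "/dev/sdc:/dev/xvdc:rwm",  # mapping taken from the comment; device names are illustrative
	    ]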
	I1017 19:08:36.736652   85117 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1017 19:08:36.736664   85117 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1017 19:08:36.736673   85117 command_runner.go:130] > # Defaults to false.
	I1017 19:08:36.736684   85117 command_runner.go:130] > # device_ownership_from_security_context = false
	I1017 19:08:36.736696   85117 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1017 19:08:36.736707   85117 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1017 19:08:36.736715   85117 command_runner.go:130] > # hooks_dir = [
	I1017 19:08:36.736723   85117 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1017 19:08:36.736732   85117 command_runner.go:130] > # ]
	I1017 19:08:36.736744   85117 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1017 19:08:36.736756   85117 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1017 19:08:36.736767   85117 command_runner.go:130] > # its default mounts from the following two files:
	I1017 19:08:36.736774   85117 command_runner.go:130] > #
	I1017 19:08:36.736783   85117 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1017 19:08:36.736795   85117 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1017 19:08:36.736809   85117 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1017 19:08:36.736817   85117 command_runner.go:130] > #
	I1017 19:08:36.736826   85117 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1017 19:08:36.736838   85117 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1017 19:08:36.736850   85117 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1017 19:08:36.736858   85117 command_runner.go:130] > #      only add mounts it finds in this file.
	I1017 19:08:36.736865   85117 command_runner.go:130] > #
	I1017 19:08:36.736871   85117 command_runner.go:130] > # default_mounts_file = ""
	I1017 19:08:36.736882   85117 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1017 19:08:36.736894   85117 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1017 19:08:36.736914   85117 command_runner.go:130] > pids_limit = 1024
	I1017 19:08:36.736938   85117 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1017 19:08:36.736957   85117 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1017 19:08:36.736976   85117 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1017 19:08:36.737004   85117 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1017 19:08:36.737015   85117 command_runner.go:130] > # log_size_max = -1
	I1017 19:08:36.737028   85117 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1017 19:08:36.737037   85117 command_runner.go:130] > # log_to_journald = false
	I1017 19:08:36.737051   85117 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1017 19:08:36.737062   85117 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1017 19:08:36.737073   85117 command_runner.go:130] > # Path to directory for container attach sockets.
	I1017 19:08:36.737084   85117 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1017 19:08:36.737094   85117 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1017 19:08:36.737102   85117 command_runner.go:130] > # bind_mount_prefix = ""
	I1017 19:08:36.737107   85117 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1017 19:08:36.737113   85117 command_runner.go:130] > # read_only = false
	I1017 19:08:36.737122   85117 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1017 19:08:36.737131   85117 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1017 19:08:36.737137   85117 command_runner.go:130] > # live configuration reload.
	I1017 19:08:36.737141   85117 command_runner.go:130] > # log_level = "info"
	I1017 19:08:36.737149   85117 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1017 19:08:36.737153   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.737159   85117 command_runner.go:130] > # log_filter = ""
	I1017 19:08:36.737165   85117 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1017 19:08:36.737175   85117 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1017 19:08:36.737181   85117 command_runner.go:130] > # separated by comma.
	I1017 19:08:36.737189   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737199   85117 command_runner.go:130] > # uid_mappings = ""
	I1017 19:08:36.737214   85117 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1017 19:08:36.737222   85117 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1017 19:08:36.737227   85117 command_runner.go:130] > # separated by comma.
	I1017 19:08:36.737234   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737238   85117 command_runner.go:130] > # gid_mappings = ""
	I1017 19:08:36.737244   85117 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1017 19:08:36.737252   85117 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1017 19:08:36.737258   85117 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1017 19:08:36.737268   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737274   85117 command_runner.go:130] > # minimum_mappable_uid = -1
	I1017 19:08:36.737280   85117 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1017 19:08:36.737285   85117 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1017 19:08:36.737293   85117 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1017 19:08:36.737301   85117 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1017 19:08:36.737306   85117 command_runner.go:130] > # minimum_mappable_gid = -1
	I1017 19:08:36.737312   85117 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1017 19:08:36.737318   85117 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1017 19:08:36.737326   85117 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1017 19:08:36.737330   85117 command_runner.go:130] > # ctr_stop_timeout = 30
	I1017 19:08:36.737335   85117 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1017 19:08:36.737343   85117 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1017 19:08:36.737349   85117 command_runner.go:130] > # a kernel-separating runtime (like kata).
	I1017 19:08:36.737354   85117 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1017 19:08:36.737360   85117 command_runner.go:130] > drop_infra_ctr = false
	I1017 19:08:36.737365   85117 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1017 19:08:36.737370   85117 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1017 19:08:36.737377   85117 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1017 19:08:36.737382   85117 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1017 19:08:36.737388   85117 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1017 19:08:36.737396   85117 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1017 19:08:36.737402   85117 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1017 19:08:36.737409   85117 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1017 19:08:36.737412   85117 command_runner.go:130] > # shared_cpuset = ""
	I1017 19:08:36.737421   85117 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1017 19:08:36.737428   85117 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1017 19:08:36.737434   85117 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1017 19:08:36.737441   85117 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1017 19:08:36.737447   85117 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1017 19:08:36.737452   85117 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1017 19:08:36.737460   85117 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1017 19:08:36.737464   85117 command_runner.go:130] > # enable_criu_support = false
	I1017 19:08:36.737471   85117 command_runner.go:130] > # Enable/disable the generation of the container,
	I1017 19:08:36.737477   85117 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1017 19:08:36.737484   85117 command_runner.go:130] > # enable_pod_events = false
	I1017 19:08:36.737490   85117 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1017 19:08:36.737507   85117 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1017 19:08:36.737510   85117 command_runner.go:130] > # default_runtime = "runc"
	I1017 19:08:36.737518   85117 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1017 19:08:36.737525   85117 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1017 19:08:36.737537   85117 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1017 19:08:36.737545   85117 command_runner.go:130] > # creation as a file is not desired either.
	I1017 19:08:36.737567   85117 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1017 19:08:36.737578   85117 command_runner.go:130] > # the hostname is being managed dynamically.
	I1017 19:08:36.737585   85117 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1017 19:08:36.737590   85117 command_runner.go:130] > # ]
	I1017 19:08:36.737597   85117 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1017 19:08:36.737605   85117 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1017 19:08:36.737613   85117 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1017 19:08:36.737618   85117 command_runner.go:130] > # Each entry in the table should follow the format:
	I1017 19:08:36.737623   85117 command_runner.go:130] > #
	I1017 19:08:36.737628   85117 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1017 19:08:36.737635   85117 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1017 19:08:36.737639   85117 command_runner.go:130] > # runtime_type = "oci"
	I1017 19:08:36.737698   85117 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1017 19:08:36.737709   85117 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1017 19:08:36.737719   85117 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1017 19:08:36.737725   85117 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1017 19:08:36.737735   85117 command_runner.go:130] > # monitor_env = []
	I1017 19:08:36.737744   85117 command_runner.go:130] > # privileged_without_host_devices = false
	I1017 19:08:36.737748   85117 command_runner.go:130] > # allowed_annotations = []
	I1017 19:08:36.737754   85117 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1017 19:08:36.737763   85117 command_runner.go:130] > # Where:
	I1017 19:08:36.737771   85117 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1017 19:08:36.737778   85117 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1017 19:08:36.737786   85117 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1017 19:08:36.737794   85117 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1017 19:08:36.737798   85117 command_runner.go:130] > #   in $PATH.
	I1017 19:08:36.737803   85117 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1017 19:08:36.737810   85117 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1017 19:08:36.737816   85117 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1017 19:08:36.737821   85117 command_runner.go:130] > #   state.
	I1017 19:08:36.737828   85117 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1017 19:08:36.737836   85117 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I1017 19:08:36.737842   85117 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1017 19:08:36.737849   85117 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1017 19:08:36.737856   85117 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1017 19:08:36.737865   85117 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1017 19:08:36.737872   85117 command_runner.go:130] > #   The currently recognized values are:
	I1017 19:08:36.737878   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1017 19:08:36.737892   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1017 19:08:36.737900   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1017 19:08:36.737906   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1017 19:08:36.737916   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1017 19:08:36.737925   85117 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1017 19:08:36.737935   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1017 19:08:36.737943   85117 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1017 19:08:36.737951   85117 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1017 19:08:36.737958   85117 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1017 19:08:36.737966   85117 command_runner.go:130] > #   deprecated option "conmon".
	I1017 19:08:36.737973   85117 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1017 19:08:36.737981   85117 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1017 19:08:36.737987   85117 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1017 19:08:36.737995   85117 command_runner.go:130] > #   should be moved to the container's cgroup
	I1017 19:08:36.738001   85117 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1017 19:08:36.738010   85117 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1017 19:08:36.738019   85117 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1017 19:08:36.738027   85117 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1017 19:08:36.738030   85117 command_runner.go:130] > #
	I1017 19:08:36.738038   85117 command_runner.go:130] > # Using the seccomp notifier feature:
	I1017 19:08:36.738041   85117 command_runner.go:130] > #
	I1017 19:08:36.738046   85117 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1017 19:08:36.738055   85117 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1017 19:08:36.738060   85117 command_runner.go:130] > #
	I1017 19:08:36.738067   85117 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1017 19:08:36.738075   85117 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1017 19:08:36.738080   85117 command_runner.go:130] > #
	I1017 19:08:36.738086   85117 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1017 19:08:36.738090   85117 command_runner.go:130] > # feature.
	I1017 19:08:36.738092   85117 command_runner.go:130] > #
	I1017 19:08:36.738100   85117 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1017 19:08:36.738108   85117 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1017 19:08:36.738114   85117 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1017 19:08:36.738123   85117 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1017 19:08:36.738132   85117 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1017 19:08:36.738137   85117 command_runner.go:130] > #
	I1017 19:08:36.738143   85117 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1017 19:08:36.738151   85117 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1017 19:08:36.738156   85117 command_runner.go:130] > #
	I1017 19:08:36.738162   85117 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1017 19:08:36.738169   85117 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1017 19:08:36.738172   85117 command_runner.go:130] > #
	I1017 19:08:36.738178   85117 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1017 19:08:36.738186   85117 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1017 19:08:36.738190   85117 command_runner.go:130] > # limitation.
	I1017 19:08:36.738198   85117 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1017 19:08:36.738202   85117 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1017 19:08:36.738212   85117 command_runner.go:130] > runtime_type = "oci"
	I1017 19:08:36.738218   85117 command_runner.go:130] > runtime_root = "/run/runc"
	I1017 19:08:36.738222   85117 command_runner.go:130] > runtime_config_path = ""
	I1017 19:08:36.738228   85117 command_runner.go:130] > monitor_path = "/usr/bin/conmon"
	I1017 19:08:36.738233   85117 command_runner.go:130] > monitor_cgroup = "pod"
	I1017 19:08:36.738239   85117 command_runner.go:130] > monitor_exec_cgroup = ""
	I1017 19:08:36.738242   85117 command_runner.go:130] > monitor_env = [
	I1017 19:08:36.738250   85117 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1017 19:08:36.738253   85117 command_runner.go:130] > ]
	I1017 19:08:36.738258   85117 command_runner.go:130] > privileged_without_host_devices = false
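	Following the runtime-handler format documented above, a second handler could sit next to runc. A sketch assuming crun is installed at /usr/bin/crun (handler name and paths are assumptions), with the seccomp-notifier annotation allowed as described earlier:
	
	    [crio.runtime.runtimes.crun]
	    runtime_path = "/usr/bin/crun"   # assumed install location
	    runtime_type = "oci"
	    runtime_root = "/run/crun"
	    monitor_path = "/usr/bin/conmon"
	    monitor_cgroup = "pod"
	    allowed_annotations = [
	        "io.kubernetes.cri-o.seccompNotifierAction",   # opts this handler into the notifier feature
	    ]
	
	Pods would select such a handler through a Kubernetes RuntimeClass whose handler field matches the table name.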
	I1017 19:08:36.738270   85117 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1017 19:08:36.738277   85117 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1017 19:08:36.738283   85117 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1017 19:08:36.738302   85117 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I1017 19:08:36.738315   85117 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I1017 19:08:36.738320   85117 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1017 19:08:36.738331   85117 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1017 19:08:36.738339   85117 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1017 19:08:36.738347   85117 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1017 19:08:36.738354   85117 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1017 19:08:36.738359   85117 command_runner.go:130] > # Example:
	I1017 19:08:36.738364   85117 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1017 19:08:36.738368   85117 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1017 19:08:36.738373   85117 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1017 19:08:36.738378   85117 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1017 19:08:36.738381   85117 command_runner.go:130] > # cpuset = "0-1"
	I1017 19:08:36.738384   85117 command_runner.go:130] > # cpushares = 0
	I1017 19:08:36.738388   85117 command_runner.go:130] > # Where:
	I1017 19:08:36.738392   85117 command_runner.go:130] > # The workload name is workload-type.
	I1017 19:08:36.738399   85117 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1017 19:08:36.738406   85117 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1017 19:08:36.738411   85117 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1017 19:08:36.738419   85117 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1017 19:08:36.738427   85117 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
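	Putting the workload pieces together, a hedged sketch (workload name, annotation key, and resource values are all hypothetical; the resource keys follow the cpuset/cpushares example above):
	
	    [crio.runtime.workloads.throttled]
	    activation_annotation = "io.crio/throttled"   # pods opt in with this annotation key (value ignored)
	    annotation_prefix = "io.crio.throttled"
	    [crio.runtime.workloads.throttled.resources]
	    cpuset = "0-1"    # Linux CPU list format
	    cpushares = 512   # assumed default shares for opted-in containers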
	I1017 19:08:36.738431   85117 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1017 19:08:36.738437   85117 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1017 19:08:36.738443   85117 command_runner.go:130] > # Default value is set to true
	I1017 19:08:36.738447   85117 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1017 19:08:36.738454   85117 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1017 19:08:36.738459   85117 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1017 19:08:36.738465   85117 command_runner.go:130] > # Default value is set to 'false'
	I1017 19:08:36.738470   85117 command_runner.go:130] > # disable_hostport_mapping = false
	I1017 19:08:36.738478   85117 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1017 19:08:36.738484   85117 command_runner.go:130] > #
	I1017 19:08:36.738489   85117 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1017 19:08:36.738500   85117 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1017 19:08:36.738508   85117 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1017 19:08:36.738517   85117 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1017 19:08:36.738522   85117 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1017 19:08:36.738529   85117 command_runner.go:130] > [crio.image]
	I1017 19:08:36.738535   85117 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1017 19:08:36.738541   85117 command_runner.go:130] > # default_transport = "docker://"
	I1017 19:08:36.738547   85117 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1017 19:08:36.738573   85117 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1017 19:08:36.738580   85117 command_runner.go:130] > # global_auth_file = ""
	I1017 19:08:36.738589   85117 command_runner.go:130] > # The image used to instantiate infra containers.
	I1017 19:08:36.738594   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.738601   85117 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10.1"
	I1017 19:08:36.738608   85117 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1017 19:08:36.738616   85117 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1017 19:08:36.738622   85117 command_runner.go:130] > # This option supports live configuration reload.
	I1017 19:08:36.738626   85117 command_runner.go:130] > # pause_image_auth_file = ""
	I1017 19:08:36.738634   85117 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1017 19:08:36.738642   85117 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1017 19:08:36.738648   85117 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1017 19:08:36.738656   85117 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1017 19:08:36.738660   85117 command_runner.go:130] > # pause_command = "/pause"
	I1017 19:08:36.738668   85117 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1017 19:08:36.738674   85117 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1017 19:08:36.738690   85117 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1017 19:08:36.738700   85117 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1017 19:08:36.738709   85117 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1017 19:08:36.738718   85117 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1017 19:08:36.738722   85117 command_runner.go:130] > # pinned_images = [
	I1017 19:08:36.738727   85117 command_runner.go:130] > # ]
	I1017 19:08:36.738734   85117 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1017 19:08:36.738742   85117 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1017 19:08:36.738748   85117 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1017 19:08:36.738756   85117 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1017 19:08:36.738762   85117 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1017 19:08:36.738768   85117 command_runner.go:130] > # signature_policy = ""
	I1017 19:08:36.738773   85117 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1017 19:08:36.738781   85117 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1017 19:08:36.738787   85117 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1017 19:08:36.738792   85117 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1017 19:08:36.738798   85117 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1017 19:08:36.738802   85117 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1017 19:08:36.738808   85117 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1017 19:08:36.738813   85117 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1017 19:08:36.738817   85117 command_runner.go:130] > # changing them here.
	I1017 19:08:36.738820   85117 command_runner.go:130] > # insecure_registries = [
	I1017 19:08:36.738823   85117 command_runner.go:130] > # ]
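	The [crio.image] knobs above combine naturally; a sketch where the pause image matches the dump and the pinned/insecure entries are assumptions (per the comment, registries.conf is the preferred place for registry configuration):
	
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    pinned_images = [
	        "registry.k8s.io/pause:3.10.1",   # keep the pod placeholder image out of kubelet garbage collection
	    ]
	    insecure_registries = [
	        "192.168.39.1:5000",              # hypothetical in-cluster registry without TLS
	    ]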
	I1017 19:08:36.738828   85117 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1017 19:08:36.738833   85117 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1017 19:08:36.738836   85117 command_runner.go:130] > # image_volumes = "mkdir"
	I1017 19:08:36.738841   85117 command_runner.go:130] > # Temporary directory to use for storing big files
	I1017 19:08:36.738845   85117 command_runner.go:130] > # big_files_temporary_dir = ""
	I1017 19:08:36.738850   85117 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1017 19:08:36.738853   85117 command_runner.go:130] > # CNI plugins.
	I1017 19:08:36.738856   85117 command_runner.go:130] > [crio.network]
	I1017 19:08:36.738861   85117 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1017 19:08:36.738869   85117 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1017 19:08:36.738873   85117 command_runner.go:130] > # cni_default_network = ""
	I1017 19:08:36.738880   85117 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1017 19:08:36.738884   85117 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1017 19:08:36.738892   85117 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1017 19:08:36.738895   85117 command_runner.go:130] > # plugin_dirs = [
	I1017 19:08:36.738901   85117 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1017 19:08:36.738904   85117 command_runner.go:130] > # ]
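	A [crio.network] sketch; the network name is an assumption and must match a network defined by a CNI config file in network_dir:
	
	    [crio.network]
	    cni_default_network = "bridge"   # assumption: otherwise CRI-O picks the first config found
	    network_dir = "/etc/cni/net.d/"
	    plugin_dirs = [
	        "/opt/cni/bin/",
	    ]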
	I1017 19:08:36.738909   85117 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1017 19:08:36.738915   85117 command_runner.go:130] > [crio.metrics]
	I1017 19:08:36.738919   85117 command_runner.go:130] > # Globally enable or disable metrics support.
	I1017 19:08:36.738925   85117 command_runner.go:130] > enable_metrics = true
	I1017 19:08:36.738929   85117 command_runner.go:130] > # Specify enabled metrics collectors.
	I1017 19:08:36.738939   85117 command_runner.go:130] > # Per default all metrics are enabled.
	I1017 19:08:36.738948   85117 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1017 19:08:36.738957   85117 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1017 19:08:36.738966   85117 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1017 19:08:36.738969   85117 command_runner.go:130] > # metrics_collectors = [
	I1017 19:08:36.738975   85117 command_runner.go:130] > # 	"operations",
	I1017 19:08:36.738980   85117 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1017 19:08:36.738988   85117 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1017 19:08:36.738992   85117 command_runner.go:130] > # 	"operations_errors",
	I1017 19:08:36.738998   85117 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1017 19:08:36.739002   85117 command_runner.go:130] > # 	"image_pulls_by_name",
	I1017 19:08:36.739008   85117 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1017 19:08:36.739012   85117 command_runner.go:130] > # 	"image_pulls_failures",
	I1017 19:08:36.739019   85117 command_runner.go:130] > # 	"image_pulls_successes",
	I1017 19:08:36.739022   85117 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1017 19:08:36.739029   85117 command_runner.go:130] > # 	"image_layer_reuse",
	I1017 19:08:36.739033   85117 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1017 19:08:36.739037   85117 command_runner.go:130] > # 	"containers_oom_total",
	I1017 19:08:36.739041   85117 command_runner.go:130] > # 	"containers_oom",
	I1017 19:08:36.739047   85117 command_runner.go:130] > # 	"processes_defunct",
	I1017 19:08:36.739050   85117 command_runner.go:130] > # 	"operations_total",
	I1017 19:08:36.739057   85117 command_runner.go:130] > # 	"operations_latency_seconds",
	I1017 19:08:36.739061   85117 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1017 19:08:36.739068   85117 command_runner.go:130] > # 	"operations_errors_total",
	I1017 19:08:36.739071   85117 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1017 19:08:36.739078   85117 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1017 19:08:36.739082   85117 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1017 19:08:36.739088   85117 command_runner.go:130] > # 	"image_pulls_success_total",
	I1017 19:08:36.739092   85117 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1017 19:08:36.739099   85117 command_runner.go:130] > # 	"containers_oom_count_total",
	I1017 19:08:36.739103   85117 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1017 19:08:36.739110   85117 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1017 19:08:36.739112   85117 command_runner.go:130] > # ]
	I1017 19:08:36.739119   85117 command_runner.go:130] > # The port on which the metrics server will listen.
	I1017 19:08:36.739125   85117 command_runner.go:130] > # metrics_port = 9090
	I1017 19:08:36.739132   85117 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1017 19:08:36.739136   85117 command_runner.go:130] > # metrics_socket = ""
	I1017 19:08:36.739143   85117 command_runner.go:130] > # The certificate for the secure metrics server.
	I1017 19:08:36.739148   85117 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1017 19:08:36.739156   85117 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1017 19:08:36.739161   85117 command_runner.go:130] > # certificate on any modification event.
	I1017 19:08:36.739165   85117 command_runner.go:130] > # metrics_cert = ""
	I1017 19:08:36.739170   85117 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1017 19:08:36.739176   85117 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1017 19:08:36.739180   85117 command_runner.go:130] > # metrics_key = ""
	I1017 19:08:36.739188   85117 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1017 19:08:36.739191   85117 command_runner.go:130] > [crio.tracing]
	I1017 19:08:36.739200   85117 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1017 19:08:36.739203   85117 command_runner.go:130] > # enable_tracing = false
	I1017 19:08:36.739214   85117 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1017 19:08:36.739221   85117 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1017 19:08:36.739227   85117 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1017 19:08:36.739240   85117 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1017 19:08:36.739246   85117 command_runner.go:130] > # CRI-O NRI configuration.
	I1017 19:08:36.739250   85117 command_runner.go:130] > [crio.nri]
	I1017 19:08:36.739254   85117 command_runner.go:130] > # Globally enable or disable NRI.
	I1017 19:08:36.739260   85117 command_runner.go:130] > # enable_nri = false
	I1017 19:08:36.739264   85117 command_runner.go:130] > # NRI socket to listen on.
	I1017 19:08:36.739271   85117 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1017 19:08:36.739275   85117 command_runner.go:130] > # NRI plugin directory to use.
	I1017 19:08:36.739280   85117 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1017 19:08:36.739287   85117 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1017 19:08:36.739291   85117 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1017 19:08:36.739299   85117 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1017 19:08:36.739303   85117 command_runner.go:130] > # nri_disable_connections = false
	I1017 19:08:36.739310   85117 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1017 19:08:36.739315   85117 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1017 19:08:36.739325   85117 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1017 19:08:36.739332   85117 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1017 19:08:36.739337   85117 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1017 19:08:36.739343   85117 command_runner.go:130] > [crio.stats]
	I1017 19:08:36.739348   85117 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1017 19:08:36.739353   85117 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1017 19:08:36.739360   85117 command_runner.go:130] > # stats_collection_period = 0
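
The [crio.metrics] block in the dump above leaves the Prometheus endpoint on its default port 9090. When operation failures like the ones in this run need a closer look, that endpoint can be scraped directly; a minimal Go sketch, assuming metrics have been enabled in crio.conf and the listener is reachable at 127.0.0.1:9090:

	package main

	import (
		"bufio"
		"fmt"
		"net/http"
		"strings"
	)

	// Fetch CRI-O's Prometheus metrics and print the operations counters.
	// Assumes metrics are enabled and metrics_port is the default 9090.
	func main() {
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		sc := bufio.NewScanner(resp.Body)
		for sc.Scan() {
			line := sc.Text()
			// Per the config comments above, "operations" is exported
			// under the container_runtime_crio_operations name.
			if strings.HasPrefix(line, "container_runtime_crio_operations") {
				fmt.Println(line)
			}
		}
		if err := sc.Err(); err != nil {
			panic(err)
		}
	}
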
	I1017 19:08:36.739439   85117 cni.go:84] Creating CNI manager for ""
	I1017 19:08:36.739451   85117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 19:08:36.739480   85117 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:08:36.739504   85117 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-016863 NodeName:functional-016863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:08:36.739644   85117 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-016863"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.205"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
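
The generated config above pins podSubnet 10.244.0.0/16 against serviceSubnet 10.96.0.0/12. A quick sanity check that the two ranges are disjoint (overlapping CIDRs are a classic source of broken cluster networking) takes a few lines of Go with net/netip; a sketch using the values from this config:

	package main

	import (
		"fmt"
		"net/netip"
	)

	// Verify that the pod and service CIDRs from the kubeadm config
	// do not overlap.
	func main() {
		pod := netip.MustParsePrefix("10.244.0.0/16")
		svc := netip.MustParsePrefix("10.96.0.0/12")

		if pod.Overlaps(svc) {
			fmt.Println("pod and service CIDRs overlap; fix the kubeadm config")
			return
		}
		fmt.Println("pod and service CIDRs are disjoint")
	}
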
	
	I1017 19:08:36.739707   85117 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:08:36.752377   85117 command_runner.go:130] > kubeadm
	I1017 19:08:36.752404   85117 command_runner.go:130] > kubectl
	I1017 19:08:36.752408   85117 command_runner.go:130] > kubelet
	I1017 19:08:36.752864   85117 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:08:36.752933   85117 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:08:36.764722   85117 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1017 19:08:36.786673   85117 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:08:36.808021   85117 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1017 19:08:36.828821   85117 ssh_runner.go:195] Run: grep 192.168.39.205	control-plane.minikube.internal$ /etc/hosts
	I1017 19:08:36.833177   85117 command_runner.go:130] > 192.168.39.205	control-plane.minikube.internal
	I1017 19:08:36.833246   85117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:08:37.010934   85117 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:08:37.030439   85117 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863 for IP: 192.168.39.205
	I1017 19:08:37.030467   85117 certs.go:195] generating shared ca certs ...
	I1017 19:08:37.030485   85117 certs.go:227] acquiring lock for ca certs: {Name:mka410ab7d3b92eaaa0d0545223807c0ba196baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:08:37.030690   85117 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key
	I1017 19:08:37.030747   85117 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key
	I1017 19:08:37.030762   85117 certs.go:257] generating profile certs ...
	I1017 19:08:37.030878   85117 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/client.key
	I1017 19:08:37.030972   85117 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.key.c24585d5
	I1017 19:08:37.031049   85117 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.key
	I1017 19:08:37.031067   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:08:37.031086   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:08:37.031102   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:08:37.031121   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:08:37.031138   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:08:37.031155   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:08:37.031179   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:08:37.031195   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:08:37.031270   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem (1338 bytes)
	W1017 19:08:37.031314   85117 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439_empty.pem, impossibly tiny 0 bytes
	I1017 19:08:37.031328   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 19:08:37.031364   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:08:37.031395   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:08:37.031426   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem (1679 bytes)
	I1017 19:08:37.031478   85117 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem (1708 bytes)
	I1017 19:08:37.031518   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.031537   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.031564   85117 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem -> /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.032341   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:08:37.064212   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:08:37.094935   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:08:37.126973   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:08:37.157540   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 19:08:37.187168   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 19:08:37.217543   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:08:37.247400   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/functional-016863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 19:08:37.278758   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem --> /usr/share/ca-certificates/794392.pem (1708 bytes)
	I1017 19:08:37.308088   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:08:37.338377   85117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem --> /usr/share/ca-certificates/79439.pem (1338 bytes)
	I1017 19:08:37.369350   85117 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:08:37.390154   85117 ssh_runner.go:195] Run: openssl version
	I1017 19:08:37.397183   85117 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1017 19:08:37.397310   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/79439.pem && ln -fs /usr/share/ca-certificates/79439.pem /etc/ssl/certs/79439.pem"
	I1017 19:08:37.411628   85117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.417085   85117 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 17 19:05 /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.417178   85117 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:05 /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.417250   85117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/79439.pem
	I1017 19:08:37.424962   85117 command_runner.go:130] > 51391683
	I1017 19:08:37.425158   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/79439.pem /etc/ssl/certs/51391683.0"
	I1017 19:08:37.437578   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/794392.pem && ln -fs /usr/share/ca-certificates/794392.pem /etc/ssl/certs/794392.pem"
	I1017 19:08:37.452363   85117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.458096   85117 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 17 19:05 /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.458164   85117 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:05 /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.458223   85117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/794392.pem
	I1017 19:08:37.466074   85117 command_runner.go:130] > 3ec20f2e
	I1017 19:08:37.466249   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/794392.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:08:37.478828   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:08:37.493772   85117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.499621   85117 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.499822   85117 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.499886   85117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:08:37.507945   85117 command_runner.go:130] > b5213941
	I1017 19:08:37.508223   85117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
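
The hashing sequence above mirrors how OpenSSL locates CA certificates: `openssl x509 -hash` prints the subject-name hash (e.g. 51391683), and a symlink named <hash>.0 in /etc/ssl/certs must point at the PEM file. A Go sketch of the same step, assuming openssl on PATH and write access to /etc/ssl/certs:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// Recreate the `ln -fs <cert> /etc/ssl/certs/<hash>.0` step: ask
	// openssl for the subject hash, then point the hash-named symlink
	// at the certificate.
	func main() {
		cert := "/usr/share/ca-certificates/79439.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // mimic ln -f: replace any stale link
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", cert)
	}
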
	I1017 19:08:37.520563   85117 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:08:37.526401   85117 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:08:37.526439   85117 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1017 19:08:37.526449   85117 command_runner.go:130] > Device: 253,1	Inode: 1054372     Links: 1
	I1017 19:08:37.526460   85117 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1017 19:08:37.526477   85117 command_runner.go:130] > Access: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526489   85117 command_runner.go:130] > Modify: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526500   85117 command_runner.go:130] > Change: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526510   85117 command_runner.go:130] >  Birth: 2025-10-17 19:06:07.267694920 +0000
	I1017 19:08:37.526610   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:08:37.533974   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.534188   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:08:37.541725   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.541833   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:08:37.549277   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.549348   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:08:37.556865   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.556943   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:08:37.564379   85117 command_runner.go:130] > Certificate will not expire
	I1017 19:08:37.564452   85117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 19:08:37.571575   85117 command_runner.go:130] > Certificate will not expire
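
Each `-checkend 86400` probe above asks whether a certificate expires within the next 24 hours; "Certificate will not expire" means it does not. The equivalent check in Go, sketched against one of the paths probed above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
	// report whether the certificate expires within the next 24 hours.
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}
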
	I1017 19:08:37.571807   85117 kubeadm.go:400] StartCluster: {Name:functional-016863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-016863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:08:37.571943   85117 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:08:37.572009   85117 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:08:37.614275   85117 command_runner.go:130] > 5052ee3b4b13e54f7516a211d580d31d7e4856f34ebe5b5bc8a1778244018fb0
	I1017 19:08:37.614306   85117 command_runner.go:130] > 56c02355399031d32d66f1780ee1bc7396eeb5eb1b454f946254fe345879e8e0
	I1017 19:08:37.614315   85117 command_runner.go:130] > 56048147246b1d30ce16d066a4bbb216f1f7c9b1459e21fa60ee108fdd3aa42a
	I1017 19:08:37.614325   85117 command_runner.go:130] > 1b1f7dfe245a6d20e55f02381f27ec11e1eec3bf32b8112aaab88ea95c008e93
	I1017 19:08:37.614332   85117 command_runner.go:130] > b4db2cb7b47399fb64d0f31922d185a1ae009961ae05b56d9514db6f489a25eb
	I1017 19:08:37.614340   85117 command_runner.go:130] > d6eeaf9720fb0a5853cabba8afe0f0c64370fd422e21db9af2a9b6ce4b9aecc1
	I1017 19:08:37.614347   85117 command_runner.go:130] > 26c7a235bb67e91ab6abf0c0282c65a526c3d2fc628ec6008956402a02d5b1e8
	I1017 19:08:37.614369   85117 command_runner.go:130] > 171e623260fdb36d39493caf1c0b8c10efb097287233e2565304b12ece716a85
	I1017 19:08:37.614383   85117 command_runner.go:130] > 0fe4cc88e7a7a757f4debf7f3ff8f76bef81d0a36e83bd994df86baa42f47a71
	I1017 19:08:37.614397   85117 command_runner.go:130] > 86dba9687f70280ffaa952d354e90ec1a4ff74d73869d9360c56690901ad9461
	I1017 19:08:37.614406   85117 command_runner.go:130] > 4d4ae675fa012cf6e18dd10516f8c83d32b364f3e27d8068722234a797bc7b1a
	I1017 19:08:37.614460   85117 cri.go:89] found id: "5052ee3b4b13e54f7516a211d580d31d7e4856f34ebe5b5bc8a1778244018fb0"
	I1017 19:08:37.614475   85117 cri.go:89] found id: "56c02355399031d32d66f1780ee1bc7396eeb5eb1b454f946254fe345879e8e0"
	I1017 19:08:37.614481   85117 cri.go:89] found id: "56048147246b1d30ce16d066a4bbb216f1f7c9b1459e21fa60ee108fdd3aa42a"
	I1017 19:08:37.614486   85117 cri.go:89] found id: "1b1f7dfe245a6d20e55f02381f27ec11e1eec3bf32b8112aaab88ea95c008e93"
	I1017 19:08:37.614490   85117 cri.go:89] found id: "b4db2cb7b47399fb64d0f31922d185a1ae009961ae05b56d9514db6f489a25eb"
	I1017 19:08:37.614498   85117 cri.go:89] found id: "d6eeaf9720fb0a5853cabba8afe0f0c64370fd422e21db9af2a9b6ce4b9aecc1"
	I1017 19:08:37.614513   85117 cri.go:89] found id: "26c7a235bb67e91ab6abf0c0282c65a526c3d2fc628ec6008956402a02d5b1e8"
	I1017 19:08:37.614519   85117 cri.go:89] found id: "171e623260fdb36d39493caf1c0b8c10efb097287233e2565304b12ece716a85"
	I1017 19:08:37.614521   85117 cri.go:89] found id: "0fe4cc88e7a7a757f4debf7f3ff8f76bef81d0a36e83bd994df86baa42f47a71"
	I1017 19:08:37.614530   85117 cri.go:89] found id: "86dba9687f70280ffaa952d354e90ec1a4ff74d73869d9360c56690901ad9461"
	I1017 19:08:37.614535   85117 cri.go:89] found id: "4d4ae675fa012cf6e18dd10516f8c83d32b364f3e27d8068722234a797bc7b1a"
	I1017 19:08:37.614538   85117 cri.go:89] found id: ""
	I1017 19:08:37.614600   85117 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
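The cri.go lines at the end of the truncated log enumerate kube-system containers by shelling out to crictl with a label filter and splitting the quiet output into IDs. A minimal Go sketch of that pattern, assuming crictl on PATH and passwordless sudo:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// List container IDs in the kube-system namespace the way the log
	// shows: `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`.
	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}
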
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016863 -n functional-016863
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016863 -n functional-016863: exit status 2 (237.426541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-016863" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (396.61s)

                                                
                                    
x
+
TestFunctional/parallel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel
functional_test.go:184: Unable to run more tests (deadline exceeded)
--- FAIL: TestFunctional/parallel (0.00s)

                                                
                                    
x
+
TestPreload (129.11s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-567736 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-567736 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m8.225545745s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-567736 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-567736 image pull gcr.io/k8s-minikube/busybox: (2.480352282s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-567736
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-567736: (6.921215812s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-567736 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 20:28:47.742003   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-567736 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (48.339487525s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-567736 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
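The failing assertion reduces to a substring scan: after the stop/start cycle, `image list` should still contain the busybox image pulled beforehand, and here it does not. A sketch of that check, with the output inlined as a hypothetical stand-in for the real command output:

	package main

	import (
		"fmt"
		"strings"
	)

	// Reduce TestPreload's failing assertion to its core: after the
	// restart, does `image list` still contain the pulled image?
	func main() {
		// Hypothetical stand-in for the real `minikube image list` output.
		imageList := []string{
			"registry.k8s.io/pause:3.10",
			"registry.k8s.io/kube-scheduler:v1.32.0",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		out := strings.Join(imageList, "\n")

		const want = "gcr.io/k8s-minikube/busybox"
		if !strings.Contains(out, want) {
			fmt.Printf("Expected to find %s in image list output\n", want)
		}
	}
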
panic.go:636: *** TestPreload FAILED at 2025-10-17 20:29:06.774173191 +0000 UTC m=+5574.031279115
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-567736 -n test-preload-567736
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-567736 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-567736 logs -n 25: (1.190668055s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-048707 ssh -n multinode-048707-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:15 UTC │ 17 Oct 25 20:15 UTC │
	│ ssh     │ multinode-048707 ssh -n multinode-048707 sudo cat /home/docker/cp-test_multinode-048707-m03_multinode-048707.txt                                                                    │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:15 UTC │ 17 Oct 25 20:15 UTC │
	│ cp      │ multinode-048707 cp multinode-048707-m03:/home/docker/cp-test.txt multinode-048707-m02:/home/docker/cp-test_multinode-048707-m03_multinode-048707-m02.txt                           │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:15 UTC │ 17 Oct 25 20:15 UTC │
	│ ssh     │ multinode-048707 ssh -n multinode-048707-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:15 UTC │ 17 Oct 25 20:15 UTC │
	│ ssh     │ multinode-048707 ssh -n multinode-048707-m02 sudo cat /home/docker/cp-test_multinode-048707-m03_multinode-048707-m02.txt                                                            │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:15 UTC │ 17 Oct 25 20:15 UTC │
	│ node    │ multinode-048707 node stop m03                                                                                                                                                      │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:15 UTC │ 17 Oct 25 20:15 UTC │
	│ node    │ multinode-048707 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:16 UTC │ 17 Oct 25 20:16 UTC │
	│ node    │ list -p multinode-048707                                                                                                                                                            │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:16 UTC │                     │
	│ stop    │ -p multinode-048707                                                                                                                                                                 │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:16 UTC │ 17 Oct 25 20:19 UTC │
	│ start   │ -p multinode-048707 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:19 UTC │ 17 Oct 25 20:22 UTC │
	│ node    │ list -p multinode-048707                                                                                                                                                            │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:22 UTC │                     │
	│ node    │ multinode-048707 node delete m03                                                                                                                                                    │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:22 UTC │ 17 Oct 25 20:22 UTC │
	│ stop    │ multinode-048707 stop                                                                                                                                                               │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:22 UTC │ 17 Oct 25 20:24 UTC │
	│ start   │ -p multinode-048707 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:24 UTC │ 17 Oct 25 20:26 UTC │
	│ node    │ list -p multinode-048707                                                                                                                                                            │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:26 UTC │                     │
	│ start   │ -p multinode-048707-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-048707-m02 │ jenkins │ v1.37.0 │ 17 Oct 25 20:26 UTC │                     │
	│ start   │ -p multinode-048707-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-048707-m03 │ jenkins │ v1.37.0 │ 17 Oct 25 20:26 UTC │ 17 Oct 25 20:26 UTC │
	│ node    │ add -p multinode-048707                                                                                                                                                             │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:26 UTC │                     │
	│ delete  │ -p multinode-048707-m03                                                                                                                                                             │ multinode-048707-m03 │ jenkins │ v1.37.0 │ 17 Oct 25 20:26 UTC │ 17 Oct 25 20:26 UTC │
	│ delete  │ -p multinode-048707                                                                                                                                                                 │ multinode-048707     │ jenkins │ v1.37.0 │ 17 Oct 25 20:26 UTC │ 17 Oct 25 20:27 UTC │
	│ start   │ -p test-preload-567736 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-567736  │ jenkins │ v1.37.0 │ 17 Oct 25 20:27 UTC │ 17 Oct 25 20:28 UTC │
	│ image   │ test-preload-567736 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-567736  │ jenkins │ v1.37.0 │ 17 Oct 25 20:28 UTC │ 17 Oct 25 20:28 UTC │
	│ stop    │ -p test-preload-567736                                                                                                                                                              │ test-preload-567736  │ jenkins │ v1.37.0 │ 17 Oct 25 20:28 UTC │ 17 Oct 25 20:28 UTC │
	│ start   │ -p test-preload-567736 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-567736  │ jenkins │ v1.37.0 │ 17 Oct 25 20:28 UTC │ 17 Oct 25 20:29 UTC │
	│ image   │ test-preload-567736 image list                                                                                                                                                      │ test-preload-567736  │ jenkins │ v1.37.0 │ 17 Oct 25 20:29 UTC │ 17 Oct 25 20:29 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:28:18
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:28:18.248545  116015 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:28:18.248656  116015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:28:18.248664  116015 out.go:374] Setting ErrFile to fd 2...
	I1017 20:28:18.248669  116015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:28:18.248867  116015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-75534/.minikube/bin
	I1017 20:28:18.249354  116015 out.go:368] Setting JSON to false
	I1017 20:28:18.250163  116015 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":11449,"bootTime":1760721449,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:28:18.250277  116015 start.go:141] virtualization: kvm guest
	I1017 20:28:18.252288  116015 out.go:179] * [test-preload-567736] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:28:18.253352  116015 notify.go:220] Checking for updates...
	I1017 20:28:18.253366  116015 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:28:18.254509  116015 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:28:18.255704  116015 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig
	I1017 20:28:18.256908  116015 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube
	I1017 20:28:18.257993  116015 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:28:18.258952  116015 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:28:18.260425  116015 config.go:182] Loaded profile config "test-preload-567736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1017 20:28:18.261078  116015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:28:18.261150  116015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:28:18.276119  116015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39183
	I1017 20:28:18.276641  116015 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:28:18.277302  116015 main.go:141] libmachine: Using API Version  1
	I1017 20:28:18.277329  116015 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:28:18.277767  116015 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:28:18.277992  116015 main.go:141] libmachine: (test-preload-567736) Calling .DriverName
	I1017 20:28:18.279655  116015 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1017 20:28:18.280856  116015 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:28:18.281299  116015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:28:18.281352  116015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:28:18.294428  116015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44887
	I1017 20:28:18.295037  116015 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:28:18.295586  116015 main.go:141] libmachine: Using API Version  1
	I1017 20:28:18.295613  116015 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:28:18.295994  116015 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:28:18.296166  116015 main.go:141] libmachine: (test-preload-567736) Calling .DriverName
	I1017 20:28:18.329654  116015 out.go:179] * Using the kvm2 driver based on existing profile
	I1017 20:28:18.330740  116015 start.go:305] selected driver: kvm2
	I1017 20:28:18.330759  116015 start.go:925] validating driver "kvm2" against &{Name:test-preload-567736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-567736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:28:18.330859  116015 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:28:18.331593  116015 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:28:18.331684  116015 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 20:28:18.344953  116015 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 20:28:18.344981  116015 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 20:28:18.358532  116015 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 20:28:18.358942  116015 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:28:18.358994  116015 cni.go:84] Creating CNI manager for ""
	I1017 20:28:18.359072  116015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 20:28:18.359127  116015 start.go:349] cluster config:
	{Name:test-preload-567736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-567736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:28:18.359238  116015 iso.go:125] acquiring lock: {Name:mk89d24a0bd9a0a8cf0564a4affa55e11eaff101 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:28:18.361058  116015 out.go:179] * Starting "test-preload-567736" primary control-plane node in "test-preload-567736" cluster
	I1017 20:28:18.362385  116015 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1017 20:28:18.383339  116015 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1017 20:28:18.383372  116015 cache.go:58] Caching tarball of preloaded images
	I1017 20:28:18.383573  116015 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1017 20:28:18.385148  116015 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1017 20:28:18.386190  116015 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1017 20:28:18.414132  116015 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1017 20:28:18.414189  116015 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1017 20:28:21.535256  116015 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
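
The preload download above is integrity-checked against an MD5 checksum fetched from the GCS API before the tarball is trusted. A minimal sketch of that verification step, assuming the tarball sits in the current directory and using the checksum logged above:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// Verify a downloaded preload tarball against the MD5 checksum
	// reported by the GCS API (as logged above).
	func main() {
		const wantMD5 = "2acdb4dde52794f2167c79dcee7507ae"

		f, err := os.Open("preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			panic(err)
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != wantMD5 {
			fmt.Printf("checksum mismatch: got %s, want %s\n", got, wantMD5)
			return
		}
		fmt.Println("preload tarball checksum OK")
	}
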
	I1017 20:28:21.535405  116015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736/config.json ...
	I1017 20:28:21.535663  116015 start.go:360] acquireMachinesLock for test-preload-567736: {Name:mke0c3abe726945d0c60793aa0bf26eb33df7fed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1017 20:28:21.535734  116015 start.go:364] duration metric: took 46.797µs to acquireMachinesLock for "test-preload-567736"
	I1017 20:28:21.535750  116015 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:28:21.535756  116015 fix.go:54] fixHost starting: 
	I1017 20:28:21.536009  116015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:28:21.536046  116015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:28:21.549475  116015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I1017 20:28:21.549926  116015 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:28:21.550399  116015 main.go:141] libmachine: Using API Version  1
	I1017 20:28:21.550421  116015 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:28:21.550771  116015 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:28:21.551004  116015 main.go:141] libmachine: (test-preload-567736) Calling .DriverName
	I1017 20:28:21.551168  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetState
	I1017 20:28:21.553040  116015 fix.go:112] recreateIfNeeded on test-preload-567736: state=Stopped err=<nil>
	I1017 20:28:21.553067  116015 main.go:141] libmachine: (test-preload-567736) Calling .DriverName
	W1017 20:28:21.553228  116015 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:28:21.556123  116015 out.go:252] * Restarting existing kvm2 VM for "test-preload-567736" ...
	I1017 20:28:21.556157  116015 main.go:141] libmachine: (test-preload-567736) Calling .Start
	I1017 20:28:21.556336  116015 main.go:141] libmachine: (test-preload-567736) starting domain...
	I1017 20:28:21.556354  116015 main.go:141] libmachine: (test-preload-567736) ensuring networks are active...
	I1017 20:28:21.557232  116015 main.go:141] libmachine: (test-preload-567736) Ensuring network default is active
	I1017 20:28:21.557651  116015 main.go:141] libmachine: (test-preload-567736) Ensuring network mk-test-preload-567736 is active
	I1017 20:28:21.558116  116015 main.go:141] libmachine: (test-preload-567736) getting domain XML...
	I1017 20:28:21.559456  116015 main.go:141] libmachine: (test-preload-567736) DBG | starting domain XML:
	I1017 20:28:21.559479  116015 main.go:141] libmachine: (test-preload-567736) DBG | <domain type='kvm'>
	I1017 20:28:21.559486  116015 main.go:141] libmachine: (test-preload-567736) DBG |   <name>test-preload-567736</name>
	I1017 20:28:21.559496  116015 main.go:141] libmachine: (test-preload-567736) DBG |   <uuid>5dc37ba8-7af1-4a69-a21c-c9e35cdeba2e</uuid>
	I1017 20:28:21.559503  116015 main.go:141] libmachine: (test-preload-567736) DBG |   <memory unit='KiB'>3145728</memory>
	I1017 20:28:21.559512  116015 main.go:141] libmachine: (test-preload-567736) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1017 20:28:21.559519  116015 main.go:141] libmachine: (test-preload-567736) DBG |   <vcpu placement='static'>2</vcpu>
	I1017 20:28:21.559536  116015 main.go:141] libmachine: (test-preload-567736) DBG |   <os>
	I1017 20:28:21.559544  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1017 20:28:21.559563  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <boot dev='cdrom'/>
	I1017 20:28:21.559605  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <boot dev='hd'/>
	I1017 20:28:21.559636  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <bootmenu enable='no'/>
	I1017 20:28:21.559646  116015 main.go:141] libmachine: (test-preload-567736) DBG |   </os>
	I1017 20:28:21.559655  116015 main.go:141] libmachine: (test-preload-567736) DBG |   <features>
	I1017 20:28:21.559661  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <acpi/>
	I1017 20:28:21.559665  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <apic/>
	I1017 20:28:21.559670  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <pae/>
	I1017 20:28:21.559674  116015 main.go:141] libmachine: (test-preload-567736) DBG |   </features>
	I1017 20:28:21.559681  116015 main.go:141] libmachine: (test-preload-567736) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1017 20:28:21.559685  116015 main.go:141] libmachine: (test-preload-567736) DBG |   <clock offset='utc'/>
	I1017 20:28:21.559691  116015 main.go:141] libmachine: (test-preload-567736) DBG |   <on_poweroff>destroy</on_poweroff>
	I1017 20:28:21.559696  116015 main.go:141] libmachine: (test-preload-567736) DBG |   <on_reboot>restart</on_reboot>
	I1017 20:28:21.559701  116015 main.go:141] libmachine: (test-preload-567736) DBG |   <on_crash>destroy</on_crash>
	I1017 20:28:21.559707  116015 main.go:141] libmachine: (test-preload-567736) DBG |   <devices>
	I1017 20:28:21.559714  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1017 20:28:21.559718  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <disk type='file' device='cdrom'>
	I1017 20:28:21.559725  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <driver name='qemu' type='raw'/>
	I1017 20:28:21.559738  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <source file='/home/jenkins/minikube-integration/21753-75534/.minikube/machines/test-preload-567736/boot2docker.iso'/>
	I1017 20:28:21.559747  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <target dev='hdc' bus='scsi'/>
	I1017 20:28:21.559760  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <readonly/>
	I1017 20:28:21.559771  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1017 20:28:21.559781  116015 main.go:141] libmachine: (test-preload-567736) DBG |     </disk>
	I1017 20:28:21.559787  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <disk type='file' device='disk'>
	I1017 20:28:21.559793  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1017 20:28:21.559804  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <source file='/home/jenkins/minikube-integration/21753-75534/.minikube/machines/test-preload-567736/test-preload-567736.rawdisk'/>
	I1017 20:28:21.559808  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <target dev='hda' bus='virtio'/>
	I1017 20:28:21.559815  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1017 20:28:21.559822  116015 main.go:141] libmachine: (test-preload-567736) DBG |     </disk>
	I1017 20:28:21.559833  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1017 20:28:21.559851  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1017 20:28:21.559861  116015 main.go:141] libmachine: (test-preload-567736) DBG |     </controller>
	I1017 20:28:21.559870  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1017 20:28:21.559881  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1017 20:28:21.559895  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1017 20:28:21.559923  116015 main.go:141] libmachine: (test-preload-567736) DBG |     </controller>
	I1017 20:28:21.559939  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <interface type='network'>
	I1017 20:28:21.559955  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <mac address='52:54:00:ec:dd:7e'/>
	I1017 20:28:21.559970  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <source network='mk-test-preload-567736'/>
	I1017 20:28:21.559982  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <model type='virtio'/>
	I1017 20:28:21.559993  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1017 20:28:21.560002  116015 main.go:141] libmachine: (test-preload-567736) DBG |     </interface>
	I1017 20:28:21.560012  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <interface type='network'>
	I1017 20:28:21.560023  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <mac address='52:54:00:05:8e:98'/>
	I1017 20:28:21.560036  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <source network='default'/>
	I1017 20:28:21.560046  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <model type='virtio'/>
	I1017 20:28:21.560081  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1017 20:28:21.560103  116015 main.go:141] libmachine: (test-preload-567736) DBG |     </interface>
	I1017 20:28:21.560127  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <serial type='pty'>
	I1017 20:28:21.560159  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <target type='isa-serial' port='0'>
	I1017 20:28:21.560174  116015 main.go:141] libmachine: (test-preload-567736) DBG |         <model name='isa-serial'/>
	I1017 20:28:21.560185  116015 main.go:141] libmachine: (test-preload-567736) DBG |       </target>
	I1017 20:28:21.560194  116015 main.go:141] libmachine: (test-preload-567736) DBG |     </serial>
	I1017 20:28:21.560225  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <console type='pty'>
	I1017 20:28:21.560237  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <target type='serial' port='0'/>
	I1017 20:28:21.560246  116015 main.go:141] libmachine: (test-preload-567736) DBG |     </console>
	I1017 20:28:21.560261  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <input type='mouse' bus='ps2'/>
	I1017 20:28:21.560278  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <input type='keyboard' bus='ps2'/>
	I1017 20:28:21.560295  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <audio id='1' type='none'/>
	I1017 20:28:21.560310  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <memballoon model='virtio'>
	I1017 20:28:21.560323  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1017 20:28:21.560336  116015 main.go:141] libmachine: (test-preload-567736) DBG |     </memballoon>
	I1017 20:28:21.560344  116015 main.go:141] libmachine: (test-preload-567736) DBG |     <rng model='virtio'>
	I1017 20:28:21.560357  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <backend model='random'>/dev/random</backend>
	I1017 20:28:21.560388  116015 main.go:141] libmachine: (test-preload-567736) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1017 20:28:21.560399  116015 main.go:141] libmachine: (test-preload-567736) DBG |     </rng>
	I1017 20:28:21.560403  116015 main.go:141] libmachine: (test-preload-567736) DBG |   </devices>
	I1017 20:28:21.560409  116015 main.go:141] libmachine: (test-preload-567736) DBG | </domain>
	I1017 20:28:21.560419  116015 main.go:141] libmachine: (test-preload-567736) DBG | 
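The XML dumped above is the complete libvirt domain definition the kvm2 driver hands to libvirt: 3 GiB of RAM, 2 vCPUs, a boot2docker ISO on a SCSI cdrom, the raw disk on virtio, and two virtio NICs (the private mk-test-preload-567736 network plus the default network). As a rough sketch of how such a definition can be registered and booted, here is a standalone Go program that shells out to the stock `virsh` tool; minikube itself drives libvirt through Go bindings rather than this CLI path, and the XML body and file path below are placeholders:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical path for a domain definition like the one dumped above.
	xmlPath := "/tmp/test-preload-567736.xml"
	domainXML := `<domain type='kvm'>...</domain>` // elided; see the full dump above

	if err := os.WriteFile(xmlPath, []byte(domainXML), 0o644); err != nil {
		log.Fatal(err)
	}
	// `virsh define` registers the domain with libvirt; `virsh start` boots it.
	for _, args := range [][]string{
		{"define", xmlPath},
		{"start", "test-preload-567736"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
		fmt.Printf("virsh %v: %s", args, out)
	}
}
```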
	I1017 20:28:22.803229  116015 main.go:141] libmachine: (test-preload-567736) waiting for domain to start...
	I1017 20:28:22.804704  116015 main.go:141] libmachine: (test-preload-567736) domain is now running
	I1017 20:28:22.804730  116015 main.go:141] libmachine: (test-preload-567736) waiting for IP...
	I1017 20:28:22.805773  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:22.806367  116015 main.go:141] libmachine: (test-preload-567736) found domain IP: 192.168.39.81
	I1017 20:28:22.806388  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has current primary IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:22.806394  116015 main.go:141] libmachine: (test-preload-567736) reserving static IP address...
	I1017 20:28:22.806857  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "test-preload-567736", mac: "52:54:00:ec:dd:7e", ip: "192.168.39.81"} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:27:16 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:22.806884  116015 main.go:141] libmachine: (test-preload-567736) reserved static IP address 192.168.39.81 for domain test-preload-567736
	I1017 20:28:22.806908  116015 main.go:141] libmachine: (test-preload-567736) DBG | skip adding static IP to network mk-test-preload-567736 - found existing host DHCP lease matching {name: "test-preload-567736", mac: "52:54:00:ec:dd:7e", ip: "192.168.39.81"}
	I1017 20:28:22.806925  116015 main.go:141] libmachine: (test-preload-567736) waiting for SSH...
	I1017 20:28:22.806940  116015 main.go:141] libmachine: (test-preload-567736) DBG | Getting to WaitForSSH function...
	I1017 20:28:22.809291  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:22.809668  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:27:16 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:22.809704  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:22.809889  116015 main.go:141] libmachine: (test-preload-567736) DBG | Using SSH client type: external
	I1017 20:28:22.809915  116015 main.go:141] libmachine: (test-preload-567736) DBG | Using SSH private key: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/test-preload-567736/id_rsa (-rw-------)
	I1017 20:28:22.809946  116015 main.go:141] libmachine: (test-preload-567736) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21753-75534/.minikube/machines/test-preload-567736/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1017 20:28:22.809967  116015 main.go:141] libmachine: (test-preload-567736) DBG | About to run SSH command:
	I1017 20:28:22.809983  116015 main.go:141] libmachine: (test-preload-567736) DBG | exit 0
	I1017 20:28:34.095464  116015 main.go:141] libmachine: (test-preload-567736) DBG | SSH cmd err, output: exit status 255: 
	I1017 20:28:34.095486  116015 main.go:141] libmachine: (test-preload-567736) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1017 20:28:34.095518  116015 main.go:141] libmachine: (test-preload-567736) DBG | command : exit 0
	I1017 20:28:34.095527  116015 main.go:141] libmachine: (test-preload-567736) DBG | err     : exit status 255
	I1017 20:28:34.095540  116015 main.go:141] libmachine: (test-preload-567736) DBG | output  : 
	I1017 20:28:37.096296  116015 main.go:141] libmachine: (test-preload-567736) DBG | Getting to WaitForSSH function...
	I1017 20:28:37.099528  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.099934  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:37.099958  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.100193  116015 main.go:141] libmachine: (test-preload-567736) DBG | Using SSH client type: external
	I1017 20:28:37.100221  116015 main.go:141] libmachine: (test-preload-567736) DBG | Using SSH private key: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/test-preload-567736/id_rsa (-rw-------)
	I1017 20:28:37.100249  116015 main.go:141] libmachine: (test-preload-567736) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21753-75534/.minikube/machines/test-preload-567736/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1017 20:28:37.100286  116015 main.go:141] libmachine: (test-preload-567736) DBG | About to run SSH command:
	I1017 20:28:37.100308  116015 main.go:141] libmachine: (test-preload-567736) DBG | exit 0
	I1017 20:28:37.235146  116015 main.go:141] libmachine: (test-preload-567736) DBG | SSH cmd err, output: <nil>: 
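The WaitForSSH phase above simply retries `exit 0` over SSH until the guest's sshd is up: the first attempt at 20:28:22 failed with exit status 255 (the daemon was not listening yet), and the retry at 20:28:37 succeeded. A minimal sketch of that polling loop, reusing the external-ssh flags visible in the log (host, key path, and timings are placeholders):

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

// waitForSSH retries `ssh ... exit 0` until it succeeds or attempts run out.
func waitForSSH(host, keyPath string, attempts int, delay time.Duration) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("ssh", args...).Run(); err == nil {
			return nil // sshd accepted the key; the machine is reachable
		}
		time.Sleep(delay) // back-off between attempts, like the log above
	}
	return err
}

func main() {
	if err := waitForSSH("192.168.39.81", "/path/to/id_rsa", 10, 3*time.Second); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is ready")
}
```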
	I1017 20:28:37.235514  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetConfigRaw
	I1017 20:28:37.236214  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetIP
	I1017 20:28:37.239230  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.239659  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:37.239683  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.239947  116015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736/config.json ...
	I1017 20:28:37.240175  116015 machine.go:93] provisionDockerMachine start ...
	I1017 20:28:37.240199  116015 main.go:141] libmachine: (test-preload-567736) Calling .DriverName
	I1017 20:28:37.240473  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHHostname
	I1017 20:28:37.243451  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.243863  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:37.243894  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.244096  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHPort
	I1017 20:28:37.244288  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:37.244431  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:37.244681  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHUsername
	I1017 20:28:37.245022  116015 main.go:141] libmachine: Using SSH client type: native
	I1017 20:28:37.245290  116015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1017 20:28:37.245304  116015 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:28:37.361914  116015 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1017 20:28:37.361954  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetMachineName
	I1017 20:28:37.362262  116015 buildroot.go:166] provisioning hostname "test-preload-567736"
	I1017 20:28:37.362297  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetMachineName
	I1017 20:28:37.362515  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHHostname
	I1017 20:28:37.365794  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.366098  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:37.366129  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.366344  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHPort
	I1017 20:28:37.366585  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:37.366781  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:37.366953  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHUsername
	I1017 20:28:37.367111  116015 main.go:141] libmachine: Using SSH client type: native
	I1017 20:28:37.367326  116015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1017 20:28:37.367338  116015 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-567736 && echo "test-preload-567736" | sudo tee /etc/hostname
	I1017 20:28:37.503432  116015 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-567736
	
	I1017 20:28:37.503480  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHHostname
	I1017 20:28:37.506989  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.507351  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:37.507381  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.507630  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHPort
	I1017 20:28:37.507864  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:37.508046  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:37.508208  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHUsername
	I1017 20:28:37.508348  116015 main.go:141] libmachine: Using SSH client type: native
	I1017 20:28:37.508582  116015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1017 20:28:37.508601  116015 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-567736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-567736/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-567736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:28:37.633241  116015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
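The shell script above keeps /etc/hosts consistent with the new hostname in an idempotent way: if a `127.0.1.1` line already exists it is rewritten in place with sed, otherwise the entry is appended with `tee -a`. A pure-Go equivalent of that edit, as a sketch operating on a local file (the path and hostname are the ones from this run):

```go
package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it maps 127.0.1.1 to name exactly once.
func ensureHostsEntry(hostsPath, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	entry := "127.0.1.1 " + name
	lines := strings.Split(string(data), "\n")
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = entry // same role as the sed 's/^127.0.1.1.*/.../' branch
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, entry) // same role as the `tee -a` branch
	}
	return os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "test-preload-567736"); err != nil {
		log.Fatal(err)
	}
}
```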
	I1017 20:28:37.633282  116015 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21753-75534/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-75534/.minikube}
	I1017 20:28:37.633352  116015 buildroot.go:174] setting up certificates
	I1017 20:28:37.633368  116015 provision.go:84] configureAuth start
	I1017 20:28:37.633384  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetMachineName
	I1017 20:28:37.633720  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetIP
	I1017 20:28:37.636851  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.637319  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:37.637342  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.637527  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHHostname
	I1017 20:28:37.640435  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.640930  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:37.640959  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.641175  116015 provision.go:143] copyHostCerts
	I1017 20:28:37.641238  116015 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem, removing ...
	I1017 20:28:37.641256  116015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem
	I1017 20:28:37.641326  116015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/ca.pem (1082 bytes)
	I1017 20:28:37.641435  116015 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem, removing ...
	I1017 20:28:37.641444  116015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem
	I1017 20:28:37.641472  116015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/cert.pem (1123 bytes)
	I1017 20:28:37.641541  116015 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem, removing ...
	I1017 20:28:37.641546  116015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem
	I1017 20:28:37.641611  116015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-75534/.minikube/key.pem (1679 bytes)
	I1017 20:28:37.641708  116015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem org=jenkins.test-preload-567736 san=[127.0.0.1 192.168.39.81 localhost minikube test-preload-567736]
	I1017 20:28:37.920195  116015 provision.go:177] copyRemoteCerts
	I1017 20:28:37.920266  116015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:28:37.920294  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHHostname
	I1017 20:28:37.923472  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.923848  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:37.923877  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:37.924098  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHPort
	I1017 20:28:37.924308  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:37.924524  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHUsername
	I1017 20:28:37.924729  116015 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/test-preload-567736/id_rsa Username:docker}
	I1017 20:28:38.020059  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 20:28:38.056340  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:28:38.092427  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1017 20:28:38.127216  116015 provision.go:87] duration metric: took 493.82823ms to configureAuth
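configureAuth above generates a server certificate whose SANs cover every name and address the machine answers as (127.0.0.1, 192.168.39.81, localhost, minikube, test-preload-567736), then copies it to /etc/docker on the guest. A simplified sketch of that generation step with Go's crypto/x509: this version self-signs for brevity, whereas minikube signs with its machine CA (the ca.pem/ca-key.pem pair in the log), and the 26280h lifetime is taken from the CertExpiration value in the config dump further below:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-567736"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs from the provision log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.81")},
		DNSNames:    []string{"localhost", "minikube", "test-preload-567736"},
	}
	// Self-signed here; minikube uses its machine CA as the parent instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	certOut, err := os.Create("server.pem")
	if err != nil {
		log.Fatal(err)
	}
	defer certOut.Close()
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})

	keyOut, err := os.Create("server-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	defer keyOut.Close()
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}
```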
	I1017 20:28:38.127272  116015 buildroot.go:189] setting minikube options for container-runtime
	I1017 20:28:38.127476  116015 config.go:182] Loaded profile config "test-preload-567736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1017 20:28:38.127586  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHHostname
	I1017 20:28:38.130970  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:38.131358  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:38.131388  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:38.131658  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHPort
	I1017 20:28:38.131894  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:38.132087  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:38.132236  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHUsername
	I1017 20:28:38.132424  116015 main.go:141] libmachine: Using SSH client type: native
	I1017 20:28:38.132667  116015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1017 20:28:38.132683  116015 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:28:38.388314  116015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:28:38.388342  116015 machine.go:96] duration metric: took 1.148150936s to provisionDockerMachine
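The CRIO_MINIKUBE_OPTIONS drop-in above is written by piping the file content into `sudo tee` over SSH and then restarting crio, which is how unprivileged SSH sessions end up writing root-owned files. A sketch of that push-bytes-over-stdin pattern; host and key path are placeholders:

```go
package main

import (
	"log"
	"os/exec"
	"strings"
)

// writeRemoteFile pipes content into `sudo tee <path>` on the remote host,
// mirroring the `printf %s "..." | sudo tee ...` command in the log.
func writeRemoteFile(host, keyPath, path, content string) error {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@"+host,
		"sudo tee "+path+" >/dev/null")
	cmd.Stdin = strings.NewReader(content)
	return cmd.Run()
}

func main() {
	opts := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := writeRemoteFile("192.168.39.81", "/path/to/id_rsa",
		"/etc/sysconfig/crio.minikube", opts); err != nil {
		log.Fatal(err)
	}
	// Restart crio so it picks up the new options, as the log does.
	if err := exec.Command("ssh", "-i", "/path/to/id_rsa",
		"docker@192.168.39.81", "sudo systemctl restart crio").Run(); err != nil {
		log.Fatal(err)
	}
}
```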
	I1017 20:28:38.388358  116015 start.go:293] postStartSetup for "test-preload-567736" (driver="kvm2")
	I1017 20:28:38.388371  116015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:28:38.388395  116015 main.go:141] libmachine: (test-preload-567736) Calling .DriverName
	I1017 20:28:38.388782  116015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:28:38.388815  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHHostname
	I1017 20:28:38.391891  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:38.392342  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:38.392370  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:38.392544  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHPort
	I1017 20:28:38.392754  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:38.392941  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHUsername
	I1017 20:28:38.393088  116015 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/test-preload-567736/id_rsa Username:docker}
	I1017 20:28:38.483135  116015 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:28:38.488517  116015 info.go:137] Remote host: Buildroot 2025.02
	I1017 20:28:38.488547  116015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-75534/.minikube/addons for local assets ...
	I1017 20:28:38.488665  116015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-75534/.minikube/files for local assets ...
	I1017 20:28:38.488776  116015 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem -> 794392.pem in /etc/ssl/certs
	I1017 20:28:38.488909  116015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:28:38.500861  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem --> /etc/ssl/certs/794392.pem (1708 bytes)
	I1017 20:28:38.531410  116015 start.go:296] duration metric: took 143.035356ms for postStartSetup
	I1017 20:28:38.531456  116015 fix.go:56] duration metric: took 16.995698957s for fixHost
	I1017 20:28:38.531482  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHHostname
	I1017 20:28:38.534509  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:38.534972  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:38.535001  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:38.535187  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHPort
	I1017 20:28:38.535407  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:38.535642  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:38.535794  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHUsername
	I1017 20:28:38.535938  116015 main.go:141] libmachine: Using SSH client type: native
	I1017 20:28:38.536172  116015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1017 20:28:38.536185  116015 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1017 20:28:38.650655  116015 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760732918.614005374
	
	I1017 20:28:38.650680  116015 fix.go:216] guest clock: 1760732918.614005374
	I1017 20:28:38.650693  116015 fix.go:229] Guest: 2025-10-17 20:28:38.614005374 +0000 UTC Remote: 2025-10-17 20:28:38.531461528 +0000 UTC m=+20.319895778 (delta=82.543846ms)
	I1017 20:28:38.650762  116015 fix.go:200] guest clock delta is within tolerance: 82.543846ms
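The clock check above reads `date +%s.%N` on the guest, compares it to the host clock, and only resyncs when the delta exceeds a tolerance; here the 82.5ms skew was within bounds. A sketch of the parse-and-compare step (host, key path, and the 2s threshold are illustrative, not minikube's exact tolerance):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	out, err := exec.Command("ssh", "-i", "/path/to/id_rsa",
		"docker@192.168.39.81", "date +%s.%N").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Parse output like "1760732918.614005374" into a time.Time.
	parts := strings.SplitN(strings.TrimSpace(string(out)), ".", 2)
	if len(parts) != 2 {
		log.Fatalf("unexpected date output: %q", out)
	}
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	if delta > tolerance {
		fmt.Printf("guest clock off by %v, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}
```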
	I1017 20:28:38.650771  116015 start.go:83] releasing machines lock for "test-preload-567736", held for 17.115025622s
	I1017 20:28:38.650804  116015 main.go:141] libmachine: (test-preload-567736) Calling .DriverName
	I1017 20:28:38.651121  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetIP
	I1017 20:28:38.654205  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:38.654643  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:38.654670  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:38.654865  116015 main.go:141] libmachine: (test-preload-567736) Calling .DriverName
	I1017 20:28:38.655367  116015 main.go:141] libmachine: (test-preload-567736) Calling .DriverName
	I1017 20:28:38.655540  116015 main.go:141] libmachine: (test-preload-567736) Calling .DriverName
	I1017 20:28:38.655632  116015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:28:38.655699  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHHostname
	I1017 20:28:38.655764  116015 ssh_runner.go:195] Run: cat /version.json
	I1017 20:28:38.655792  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHHostname
	I1017 20:28:38.658805  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:38.658886  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:38.659217  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:38.659252  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:38.659278  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:38.659301  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:38.659477  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHPort
	I1017 20:28:38.659656  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHPort
	I1017 20:28:38.659740  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:38.659842  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:38.659887  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHUsername
	I1017 20:28:38.660033  116015 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/test-preload-567736/id_rsa Username:docker}
	I1017 20:28:38.660062  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHUsername
	I1017 20:28:38.660233  116015 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/test-preload-567736/id_rsa Username:docker}
	I1017 20:28:38.743131  116015 ssh_runner.go:195] Run: systemctl --version
	I1017 20:28:38.771540  116015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:28:38.921545  116015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:28:38.930123  116015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:28:38.930301  116015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:28:38.953123  116015 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1017 20:28:38.953149  116015 start.go:495] detecting cgroup driver to use...
	I1017 20:28:38.953215  116015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:28:38.972616  116015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:28:38.989117  116015 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:28:38.989200  116015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:28:39.007508  116015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:28:39.024371  116015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:28:39.177954  116015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:28:39.418630  116015 docker.go:234] disabling docker service ...
	I1017 20:28:39.418700  116015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:28:39.436599  116015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:28:39.452895  116015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:28:39.612960  116015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:28:39.767121  116015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
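With crio selected as the runtime, the steps above stop, disable, and mask the cri-docker and docker units so neither can claim the CRI socket after a reboot. A sketch of that sequence as it would run on the guest; failures are deliberately tolerated, since on some images the units simply do not exist:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Stop, disable, and mask competing runtimes, mirroring the log sequence.
	steps := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (ignored)\n%s", s, err, out)
		}
	}
}
```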
	I1017 20:28:39.783722  116015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:28:39.807757  116015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1017 20:28:39.807824  116015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:28:39.820763  116015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:28:39.820849  116015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:28:39.834027  116015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:28:39.850540  116015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:28:39.863776  116015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:28:39.878805  116015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:28:39.892846  116015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:28:39.914374  116015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:28:39.928304  116015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:28:39.940450  116015 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1017 20:28:39.940528  116015 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1017 20:28:39.963444  116015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:28:39.976514  116015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:28:40.123064  116015 ssh_runner.go:195] Run: sudo systemctl restart crio
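The crio configuration above is a series of whole-line rewrites to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged port sysctl) followed by a daemon-reload and a crio restart. A pure-Go equivalent of the first two sed substitutions, as a sketch run on the guest with root privileges:

```go
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
	// A `systemctl daemon-reload` and `systemctl restart crio` follow, as above.
}
```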
	I1017 20:28:40.243251  116015 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:28:40.243323  116015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:28:40.249507  116015 start.go:563] Will wait 60s for crictl version
	I1017 20:28:40.249600  116015 ssh_runner.go:195] Run: which crictl
	I1017 20:28:40.254447  116015 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1017 20:28:40.300122  116015 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1017 20:28:40.300224  116015 ssh_runner.go:195] Run: crio --version
	I1017 20:28:40.332619  116015 ssh_runner.go:195] Run: crio --version
	I1017 20:28:40.364150  116015 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1017 20:28:40.365423  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetIP
	I1017 20:28:40.368871  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:40.369441  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:40.369471  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:40.369786  116015 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1017 20:28:40.374753  116015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:28:40.390758  116015 kubeadm.go:883] updating cluster {Name:test-preload-567736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-567736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:28:40.390865  116015 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1017 20:28:40.390904  116015 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:28:40.431838  116015 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
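The preload check above runs `sudo crictl images --output json` and looks for the pinned apiserver tag among the repo tags; when it is missing, as here, the tarball path below kicks in. A sketch of that check; the JSON field names (`images`, `repoTags`) follow crictl's CRI-style output shape and are an assumption here:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// imageList mirrors the relevant slice of `crictl images --output json`
// (field names assumed from crictl's CRI-style output).
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	const want = "registry.k8s.io/kube-apiserver:v1.32.0"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, want) {
				fmt.Println("preloaded images present")
				return
			}
		}
	}
	fmt.Println("assuming images are not preloaded")
}
```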
	I1017 20:28:40.431907  116015 ssh_runner.go:195] Run: which lz4
	I1017 20:28:40.436665  116015 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1017 20:28:40.441673  116015 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1017 20:28:40.441706  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1017 20:28:42.026431  116015 crio.go:462] duration metric: took 1.589798712s to copy over tarball
	I1017 20:28:42.026508  116015 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1017 20:28:43.726084  116015 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.699534168s)
	I1017 20:28:43.726115  116015 crio.go:469] duration metric: took 1.699650254s to extract the tarball
	I1017 20:28:43.726124  116015 ssh_runner.go:146] rm: /preloaded.tar.lz4
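The preload fast path above copies the ~380 MiB tarball to /preloaded.tar.lz4, unpacks it into /var with security xattrs (file capabilities) preserved, and deletes it, timing each step for the duration metrics. A sketch of the extract step using the same tar invocation as the log; it assumes lz4 and sudo are available on the machine it runs on:

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same invocation as the log: lz4-decompress into /var, keeping
	// security.capability xattrs intact so binaries keep their capabilities.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract: %v\n%s", err, out)
	}
	log.Printf("duration metric: took %s to extract the tarball", time.Since(start))

	// The tarball is removed once extracted, as in the log.
	if err := os.Remove("/preloaded.tar.lz4"); err != nil {
		log.Print(err)
	}
}
```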
	I1017 20:28:43.768841  116015 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:28:43.813096  116015 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:28:43.813121  116015 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:28:43.813132  116015 kubeadm.go:934] updating node { 192.168.39.81 8443 v1.32.0 crio true true} ...
	I1017 20:28:43.813229  116015 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-567736 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-567736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:28:43.813298  116015 ssh_runner.go:195] Run: crio config
	I1017 20:28:43.864660  116015 cni.go:84] Creating CNI manager for ""
	I1017 20:28:43.864706  116015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 20:28:43.864731  116015 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:28:43.864755  116015 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.81 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-567736 NodeName:test-preload-567736 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:28:43.864884  116015 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-567736"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.81"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.81"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:28:43.864966  116015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1017 20:28:43.877704  116015 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:28:43.877780  116015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:28:43.890364  116015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1017 20:28:43.912288  116015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:28:43.934267  116015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1017 20:28:43.956687  116015 ssh_runner.go:195] Run: grep 192.168.39.81	control-plane.minikube.internal$ /etc/hosts
	I1017 20:28:43.961241  116015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:28:43.976972  116015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:28:44.122122  116015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:28:44.154355  116015 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736 for IP: 192.168.39.81
	I1017 20:28:44.154391  116015 certs.go:195] generating shared ca certs ...
	I1017 20:28:44.154416  116015 certs.go:227] acquiring lock for ca certs: {Name:mka410ab7d3b92eaaa0d0545223807c0ba196baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:28:44.154644  116015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key
	I1017 20:28:44.154712  116015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key
	I1017 20:28:44.154729  116015 certs.go:257] generating profile certs ...
	I1017 20:28:44.154842  116015 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736/client.key
	I1017 20:28:44.154943  116015 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736/apiserver.key.dfe86e38
	I1017 20:28:44.155000  116015 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736/proxy-client.key
	I1017 20:28:44.155152  116015 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem (1338 bytes)
	W1017 20:28:44.155198  116015 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439_empty.pem, impossibly tiny 0 bytes
	I1017 20:28:44.155223  116015 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca-key.pem (1679 bytes)
	I1017 20:28:44.155272  116015 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:28:44.155304  116015 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:28:44.155337  116015 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/certs/key.pem (1679 bytes)
	I1017 20:28:44.155420  116015 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem (1708 bytes)
	I1017 20:28:44.156226  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:28:44.198539  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:28:44.240771  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:28:44.273355  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:28:44.304889  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1017 20:28:44.335730  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:28:44.372492  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:28:44.407215  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:28:44.442513  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:28:44.476507  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/certs/79439.pem --> /usr/share/ca-certificates/79439.pem (1338 bytes)
	I1017 20:28:44.509605  116015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/ssl/certs/794392.pem --> /usr/share/ca-certificates/794392.pem (1708 bytes)
	I1017 20:28:44.546218  116015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:28:44.570914  116015 ssh_runner.go:195] Run: openssl version
	I1017 20:28:44.578305  116015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:28:44.593235  116015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:28:44.599136  116015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:28:44.599199  116015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:28:44.607753  116015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:28:44.622770  116015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/79439.pem && ln -fs /usr/share/ca-certificates/79439.pem /etc/ssl/certs/79439.pem"
	I1017 20:28:44.637400  116015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/79439.pem
	I1017 20:28:44.643481  116015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:05 /usr/share/ca-certificates/79439.pem
	I1017 20:28:44.643568  116015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/79439.pem
	I1017 20:28:44.652099  116015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/79439.pem /etc/ssl/certs/51391683.0"
	I1017 20:28:44.667378  116015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/794392.pem && ln -fs /usr/share/ca-certificates/794392.pem /etc/ssl/certs/794392.pem"
	I1017 20:28:44.681940  116015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/794392.pem
	I1017 20:28:44.687986  116015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:05 /usr/share/ca-certificates/794392.pem
	I1017 20:28:44.688058  116015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/794392.pem
	I1017 20:28:44.696494  116015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/794392.pem /etc/ssl/certs/3ec20f2e.0"
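	
	The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to locate CA files during verification, and each certificate is then linked into /etc/ssl/certs under <hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 here). A sketch of the same step for one certificate, using this run's minikubeCA copy:
	
	    # The subject hash is the filename OpenSSL looks up at verify time ...
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # ... so the CA is exposed as /etc/ssl/certs/<hash>.0 for TLS clients.
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	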
	I1017 20:28:44.711435  116015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:28:44.717605  116015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:28:44.725729  116015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:28:44.734576  116015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:28:44.743383  116015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:28:44.751657  116015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:28:44.759989  116015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
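	
	Each of the -checkend 86400 probes above asks whether a certificate expires within the next 86400 seconds (24 hours): openssl exits 0 when the cert stays valid past that window and non-zero when it is about to lapse, which is how the restart path decides the existing certs can be reused. For example:
	
	    # Exit status 0 means apiserver.crt is still valid for at least 24h.
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	        echo "cert valid for >=24h; reuse it"
	    else
	        echo "cert expires within 24h; regenerate"
	    fi
	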
	I1017 20:28:44.768731  116015 kubeadm.go:400] StartCluster: {Name:test-preload-567736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-567736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:28:44.768816  116015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:28:44.768882  116015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:28:44.815679  116015 cri.go:89] found id: ""
	I1017 20:28:44.815773  116015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:28:44.829474  116015 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:28:44.829497  116015 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:28:44.829573  116015 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:28:44.845835  116015 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:28:44.846330  116015 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-567736" does not appear in /home/jenkins/minikube-integration/21753-75534/kubeconfig
	I1017 20:28:44.846471  116015 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-75534/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-567736" cluster setting kubeconfig missing "test-preload-567736" context setting]
	I1017 20:28:44.846857  116015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-75534/kubeconfig: {Name:mkeb0035d9ef9d3dc893fc7f4a25aa46f7d51ce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:28:44.847451  116015 kapi.go:59] client config for test-preload-567736: &rest.Config{Host:"https://192.168.39.81:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736/client.key", CAFile:"/home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819bc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 20:28:44.847890  116015 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 20:28:44.847905  116015 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 20:28:44.847909  116015 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 20:28:44.847913  116015 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 20:28:44.847916  116015 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 20:28:44.848319  116015 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:28:44.861750  116015 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.81
	I1017 20:28:44.861788  116015 kubeadm.go:1160] stopping kube-system containers ...
	I1017 20:28:44.861801  116015 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1017 20:28:44.861869  116015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:28:44.915253  116015 cri.go:89] found id: ""
	I1017 20:28:44.915332  116015 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1017 20:28:44.945013  116015 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:28:44.958070  116015 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:28:44.958090  116015 kubeadm.go:157] found existing configuration files:
	
	I1017 20:28:44.958136  116015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:28:44.970409  116015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:28:44.970490  116015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:28:44.983375  116015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:28:44.994788  116015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:28:44.994846  116015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:28:45.007695  116015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:28:45.019763  116015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:28:45.019842  116015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:28:45.032253  116015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:28:45.043898  116015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:28:45.043961  116015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 20:28:45.057000  116015 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:28:45.070028  116015 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:28:45.136751  116015 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:28:46.155534  116015 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.018740947s)
	I1017 20:28:46.155642  116015 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:28:46.423415  116015 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:28:46.489051  116015 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:28:46.588891  116015 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:28:46.588992  116015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:28:47.089278  116015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:28:47.589466  116015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:28:48.089586  116015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:28:48.589165  116015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:28:49.090011  116015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:28:49.120708  116015 api_server.go:72] duration metric: took 2.531835039s to wait for apiserver process to appear ...
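	
	The repeated pgrep runs above are a plain poll: the process check is retried roughly every half second until a kube-apiserver whose command line matches the minikube pattern shows up. A hedged shell equivalent of that wait, with the flags taken from the log (-f matches against the full command line, -x requires the whole line to match the pattern, -n keeps only the newest match):
	
	    # Block until the apiserver process appears, checking twice a second.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        sleep 0.5
	    done
	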
	I1017 20:28:49.120740  116015 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:28:49.120762  116015 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I1017 20:28:51.914680  116015 api_server.go:279] https://192.168.39.81:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1017 20:28:51.914708  116015 api_server.go:103] status: https://192.168.39.81:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1017 20:28:51.914723  116015 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I1017 20:28:51.962226  116015 api_server.go:279] https://192.168.39.81:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1017 20:28:51.962255  116015 api_server.go:103] status: https://192.168.39.81:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1017 20:28:52.121706  116015 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I1017 20:28:52.126503  116015 api_server.go:279] https://192.168.39.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:28:52.126528  116015 api_server.go:103] status: https://192.168.39.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:28:52.621155  116015 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I1017 20:28:52.629133  116015 api_server.go:279] https://192.168.39.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:28:52.629160  116015 api_server.go:103] status: https://192.168.39.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:28:53.121874  116015 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I1017 20:28:53.129984  116015 api_server.go:279] https://192.168.39.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:28:53.130014  116015 api_server.go:103] status: https://192.168.39.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:28:53.621016  116015 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I1017 20:28:53.625978  116015 api_server.go:279] https://192.168.39.81:8443/healthz returned 200:
	ok
	I1017 20:28:53.633679  116015 api_server.go:141] control plane version: v1.32.0
	I1017 20:28:53.633710  116015 api_server.go:131] duration metric: took 4.512964514s to wait for apiserver health ...
	I1017 20:28:53.633723  116015 cni.go:84] Creating CNI manager for ""
	I1017 20:28:53.633731  116015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 20:28:53.635486  116015 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1017 20:28:53.636892  116015 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1017 20:28:53.651167  116015 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
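	
	The 496-byte payload written here is minikube's bridge CNI config, which kubelet picks up from /etc/cni/net.d. The actual file content is not shown in the log; the following is only a representative bridge conflist of the kind used with the crio runtime (subnet and names are illustrative, not the real file):
	
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.4.0",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
	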
	I1017 20:28:53.673761  116015 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:28:53.680491  116015 system_pods.go:59] 7 kube-system pods found
	I1017 20:28:53.680542  116015 system_pods.go:61] "coredns-668d6bf9bc-qktj2" [613e0d53-4928-49e0-8ff5-2a9195622ac6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:28:53.680570  116015 system_pods.go:61] "etcd-test-preload-567736" [5c6477c5-abee-451c-8801-4aeacd42221e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:28:53.680582  116015 system_pods.go:61] "kube-apiserver-test-preload-567736" [a2d9a438-f8d7-4e46-8e65-fce4bcef0436] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:28:53.680597  116015 system_pods.go:61] "kube-controller-manager-test-preload-567736" [97ae8710-006b-4101-b896-661b821409e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:28:53.680605  116015 system_pods.go:61] "kube-proxy-glnnt" [cf250f50-85ce-4e8a-a3b7-2cbde28ed3aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:28:53.680615  116015 system_pods.go:61] "kube-scheduler-test-preload-567736" [fe8e8d0f-98ac-4334-81b8-6db347311ff3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:28:53.680622  116015 system_pods.go:61] "storage-provisioner" [1cba2c5a-38dd-432f-bed0-8ad333b6e196] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:28:53.680633  116015 system_pods.go:74] duration metric: took 6.845119ms to wait for pod list to return data ...
	I1017 20:28:53.680645  116015 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:28:53.684177  116015 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1017 20:28:53.684211  116015 node_conditions.go:123] node cpu capacity is 2
	I1017 20:28:53.684229  116015 node_conditions.go:105] duration metric: took 3.577659ms to run NodePressure ...
	I1017 20:28:53.684300  116015 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:28:53.964780  116015 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1017 20:28:53.970387  116015 kubeadm.go:743] kubelet initialised
	I1017 20:28:53.970415  116015 kubeadm.go:744] duration metric: took 5.60093ms waiting for restarted kubelet to initialise ...
	I1017 20:28:53.970438  116015 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:28:53.989353  116015 ops.go:34] apiserver oom_adj: -16
	I1017 20:28:53.989380  116015 kubeadm.go:601] duration metric: took 9.159876353s to restartPrimaryControlPlane
	I1017 20:28:53.989394  116015 kubeadm.go:402] duration metric: took 9.220671476s to StartCluster
	I1017 20:28:53.989416  116015 settings.go:142] acquiring lock: {Name:mkda33fafc6cb583284a8333cb60efdc2a47f894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:28:53.989509  116015 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-75534/kubeconfig
	I1017 20:28:53.990098  116015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-75534/kubeconfig: {Name:mkeb0035d9ef9d3dc893fc7f4a25aa46f7d51ce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:28:53.990387  116015 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:28:53.990446  116015 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:28:53.990562  116015 addons.go:69] Setting storage-provisioner=true in profile "test-preload-567736"
	I1017 20:28:53.990584  116015 addons.go:238] Setting addon storage-provisioner=true in "test-preload-567736"
	W1017 20:28:53.990594  116015 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:28:53.990597  116015 addons.go:69] Setting default-storageclass=true in profile "test-preload-567736"
	I1017 20:28:53.990621  116015 host.go:66] Checking if "test-preload-567736" exists ...
	I1017 20:28:53.990619  116015 config.go:182] Loaded profile config "test-preload-567736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1017 20:28:53.990628  116015 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-567736"
	I1017 20:28:53.990975  116015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:28:53.991026  116015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:28:53.991065  116015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:28:53.991030  116015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:28:53.993152  116015 out.go:179] * Verifying Kubernetes components...
	I1017 20:28:53.994533  116015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:28:54.005079  116015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34013
	I1017 20:28:54.005150  116015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41025
	I1017 20:28:54.005605  116015 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:28:54.005628  116015 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:28:54.006081  116015 main.go:141] libmachine: Using API Version  1
	I1017 20:28:54.006102  116015 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:28:54.006080  116015 main.go:141] libmachine: Using API Version  1
	I1017 20:28:54.006118  116015 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:28:54.006442  116015 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:28:54.006491  116015 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:28:54.006717  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetState
	I1017 20:28:54.006984  116015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:28:54.007012  116015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:28:54.008929  116015 kapi.go:59] client config for test-preload-567736: &rest.Config{Host:"https://192.168.39.81:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736/client.key", CAFile:"/home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819bc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 20:28:54.009176  116015 addons.go:238] Setting addon default-storageclass=true in "test-preload-567736"
	W1017 20:28:54.009189  116015 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:28:54.009211  116015 host.go:66] Checking if "test-preload-567736" exists ...
	I1017 20:28:54.009479  116015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:28:54.009506  116015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:28:54.021351  116015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43113
	I1017 20:28:54.021946  116015 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:28:54.022580  116015 main.go:141] libmachine: Using API Version  1
	I1017 20:28:54.022611  116015 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:28:54.022992  116015 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:28:54.023053  116015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34251
	I1017 20:28:54.023188  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetState
	I1017 20:28:54.023576  116015 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:28:54.024075  116015 main.go:141] libmachine: Using API Version  1
	I1017 20:28:54.024101  116015 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:28:54.024430  116015 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:28:54.024958  116015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:28:54.025019  116015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:28:54.025323  116015 main.go:141] libmachine: (test-preload-567736) Calling .DriverName
	I1017 20:28:54.027279  116015 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:28:54.028501  116015 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:28:54.028519  116015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:28:54.028540  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHHostname
	I1017 20:28:54.032209  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:54.032740  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:54.032769  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:54.033105  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHPort
	I1017 20:28:54.033325  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:54.033581  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHUsername
	I1017 20:28:54.033811  116015 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/test-preload-567736/id_rsa Username:docker}
	I1017 20:28:54.038941  116015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33335
	I1017 20:28:54.039373  116015 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:28:54.039797  116015 main.go:141] libmachine: Using API Version  1
	I1017 20:28:54.039815  116015 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:28:54.040157  116015 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:28:54.040370  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetState
	I1017 20:28:54.042077  116015 main.go:141] libmachine: (test-preload-567736) Calling .DriverName
	I1017 20:28:54.042295  116015 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:28:54.042309  116015 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:28:54.042325  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHHostname
	I1017 20:28:54.045163  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:54.045609  116015 main.go:141] libmachine: (test-preload-567736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:dd:7e", ip: ""} in network mk-test-preload-567736: {Iface:virbr1 ExpiryTime:2025-10-17 21:28:33 +0000 UTC Type:0 Mac:52:54:00:ec:dd:7e Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:test-preload-567736 Clientid:01:52:54:00:ec:dd:7e}
	I1017 20:28:54.045640  116015 main.go:141] libmachine: (test-preload-567736) DBG | domain test-preload-567736 has defined IP address 192.168.39.81 and MAC address 52:54:00:ec:dd:7e in network mk-test-preload-567736
	I1017 20:28:54.045776  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHPort
	I1017 20:28:54.045964  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHKeyPath
	I1017 20:28:54.046122  116015 main.go:141] libmachine: (test-preload-567736) Calling .GetSSHUsername
	I1017 20:28:54.046252  116015 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/test-preload-567736/id_rsa Username:docker}
	I1017 20:28:54.269785  116015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:28:54.297932  116015 node_ready.go:35] waiting up to 6m0s for node "test-preload-567736" to be "Ready" ...
	I1017 20:28:54.444633  116015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:28:54.478964  116015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:28:55.155850  116015 main.go:141] libmachine: Making call to close driver server
	I1017 20:28:55.155884  116015 main.go:141] libmachine: (test-preload-567736) Calling .Close
	I1017 20:28:55.155914  116015 main.go:141] libmachine: Making call to close driver server
	I1017 20:28:55.155935  116015 main.go:141] libmachine: (test-preload-567736) Calling .Close
	I1017 20:28:55.156231  116015 main.go:141] libmachine: Successfully made call to close driver server
	I1017 20:28:55.156266  116015 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 20:28:55.156258  116015 main.go:141] libmachine: (test-preload-567736) DBG | Closing plugin on server side
	I1017 20:28:55.156276  116015 main.go:141] libmachine: Making call to close driver server
	I1017 20:28:55.156285  116015 main.go:141] libmachine: (test-preload-567736) Calling .Close
	I1017 20:28:55.156231  116015 main.go:141] libmachine: (test-preload-567736) DBG | Closing plugin on server side
	I1017 20:28:55.156236  116015 main.go:141] libmachine: Successfully made call to close driver server
	I1017 20:28:55.156355  116015 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 20:28:55.156384  116015 main.go:141] libmachine: Making call to close driver server
	I1017 20:28:55.156397  116015 main.go:141] libmachine: (test-preload-567736) Calling .Close
	I1017 20:28:55.156578  116015 main.go:141] libmachine: Successfully made call to close driver server
	I1017 20:28:55.156594  116015 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 20:28:55.156732  116015 main.go:141] libmachine: (test-preload-567736) DBG | Closing plugin on server side
	I1017 20:28:55.156884  116015 main.go:141] libmachine: Successfully made call to close driver server
	I1017 20:28:55.156929  116015 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 20:28:55.163563  116015 main.go:141] libmachine: Making call to close driver server
	I1017 20:28:55.163607  116015 main.go:141] libmachine: (test-preload-567736) Calling .Close
	I1017 20:28:55.163850  116015 main.go:141] libmachine: Successfully made call to close driver server
	I1017 20:28:55.163868  116015 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 20:28:55.163876  116015 main.go:141] libmachine: (test-preload-567736) DBG | Closing plugin on server side
	I1017 20:28:55.166465  116015 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1017 20:28:55.167593  116015 addons.go:514] duration metric: took 1.177151789s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1017 20:28:56.301281  116015 node_ready.go:57] node "test-preload-567736" has "Ready":"False" status (will retry)
	W1017 20:28:58.301778  116015 node_ready.go:57] node "test-preload-567736" has "Ready":"False" status (will retry)
	W1017 20:29:00.302076  116015 node_ready.go:57] node "test-preload-567736" has "Ready":"False" status (will retry)
	I1017 20:29:02.801338  116015 node_ready.go:49] node "test-preload-567736" is "Ready"
	I1017 20:29:02.801375  116015 node_ready.go:38] duration metric: took 8.503387017s for node "test-preload-567736" to be "Ready" ...
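	
	The retry loop above blocks on the node's Ready condition with a 6m budget; the same wait expressed directly with kubectl against this profile would be:
	
	    kubectl --context test-preload-567736 wait --for=condition=Ready \
	        node/test-preload-567736 --timeout=6m
	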
	I1017 20:29:02.801394  116015 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:29:02.801451  116015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:29:02.830358  116015 api_server.go:72] duration metric: took 8.839929556s to wait for apiserver process to appear ...
	I1017 20:29:02.830399  116015 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:29:02.830427  116015 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I1017 20:29:02.839672  116015 api_server.go:279] https://192.168.39.81:8443/healthz returned 200:
	ok
	I1017 20:29:02.840672  116015 api_server.go:141] control plane version: v1.32.0
	I1017 20:29:02.840704  116015 api_server.go:131] duration metric: took 10.294826ms to wait for apiserver health ...
	I1017 20:29:02.840716  116015 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:29:02.844666  116015 system_pods.go:59] 7 kube-system pods found
	I1017 20:29:02.844691  116015 system_pods.go:61] "coredns-668d6bf9bc-qktj2" [613e0d53-4928-49e0-8ff5-2a9195622ac6] Running
	I1017 20:29:02.844699  116015 system_pods.go:61] "etcd-test-preload-567736" [5c6477c5-abee-451c-8801-4aeacd42221e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:29:02.844704  116015 system_pods.go:61] "kube-apiserver-test-preload-567736" [a2d9a438-f8d7-4e46-8e65-fce4bcef0436] Running
	I1017 20:29:02.844715  116015 system_pods.go:61] "kube-controller-manager-test-preload-567736" [97ae8710-006b-4101-b896-661b821409e5] Running
	I1017 20:29:02.844719  116015 system_pods.go:61] "kube-proxy-glnnt" [cf250f50-85ce-4e8a-a3b7-2cbde28ed3aa] Running
	I1017 20:29:02.844722  116015 system_pods.go:61] "kube-scheduler-test-preload-567736" [fe8e8d0f-98ac-4334-81b8-6db347311ff3] Running
	I1017 20:29:02.844725  116015 system_pods.go:61] "storage-provisioner" [1cba2c5a-38dd-432f-bed0-8ad333b6e196] Running
	I1017 20:29:02.844732  116015 system_pods.go:74] duration metric: took 4.008942ms to wait for pod list to return data ...
	I1017 20:29:02.844743  116015 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:29:02.848292  116015 default_sa.go:45] found service account: "default"
	I1017 20:29:02.848310  116015 default_sa.go:55] duration metric: took 3.561899ms for default service account to be created ...
	I1017 20:29:02.848320  116015 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:29:02.850778  116015 system_pods.go:86] 7 kube-system pods found
	I1017 20:29:02.850800  116015 system_pods.go:89] "coredns-668d6bf9bc-qktj2" [613e0d53-4928-49e0-8ff5-2a9195622ac6] Running
	I1017 20:29:02.850807  116015 system_pods.go:89] "etcd-test-preload-567736" [5c6477c5-abee-451c-8801-4aeacd42221e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:29:02.850812  116015 system_pods.go:89] "kube-apiserver-test-preload-567736" [a2d9a438-f8d7-4e46-8e65-fce4bcef0436] Running
	I1017 20:29:02.850817  116015 system_pods.go:89] "kube-controller-manager-test-preload-567736" [97ae8710-006b-4101-b896-661b821409e5] Running
	I1017 20:29:02.850825  116015 system_pods.go:89] "kube-proxy-glnnt" [cf250f50-85ce-4e8a-a3b7-2cbde28ed3aa] Running
	I1017 20:29:02.850829  116015 system_pods.go:89] "kube-scheduler-test-preload-567736" [fe8e8d0f-98ac-4334-81b8-6db347311ff3] Running
	I1017 20:29:02.850831  116015 system_pods.go:89] "storage-provisioner" [1cba2c5a-38dd-432f-bed0-8ad333b6e196] Running
	I1017 20:29:02.850838  116015 system_pods.go:126] duration metric: took 2.512798ms to wait for k8s-apps to be running ...
	I1017 20:29:02.850845  116015 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:29:02.850886  116015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:29:02.869939  116015 system_svc.go:56] duration metric: took 19.081506ms WaitForService to wait for kubelet
	I1017 20:29:02.869971  116015 kubeadm.go:586] duration metric: took 8.879555205s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:29:02.870004  116015 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:29:02.873219  116015 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1017 20:29:02.873254  116015 node_conditions.go:123] node cpu capacity is 2
	I1017 20:29:02.873269  116015 node_conditions.go:105] duration metric: took 3.260369ms to run NodePressure ...
	I1017 20:29:02.873285  116015 start.go:241] waiting for startup goroutines ...
	I1017 20:29:02.873295  116015 start.go:246] waiting for cluster config update ...
	I1017 20:29:02.873308  116015 start.go:255] writing updated cluster config ...
	I1017 20:29:02.873701  116015 ssh_runner.go:195] Run: rm -f paused
	I1017 20:29:02.880317  116015 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:29:02.880973  116015 kapi.go:59] client config for test-preload-567736: &rest.Config{Host:"https://192.168.39.81:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-75534/.minikube/profiles/test-preload-567736/client.key", CAFile:"/home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819bc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 20:29:02.884897  116015 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-qktj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:29:02.889924  116015 pod_ready.go:94] pod "coredns-668d6bf9bc-qktj2" is "Ready"
	I1017 20:29:02.889950  116015 pod_ready.go:86] duration metric: took 5.027815ms for pod "coredns-668d6bf9bc-qktj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:29:02.893649  116015 pod_ready.go:83] waiting for pod "etcd-test-preload-567736" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 20:29:04.900314  116015 pod_ready.go:104] pod "etcd-test-preload-567736" is not "Ready", error: <nil>
	I1017 20:29:05.399399  116015 pod_ready.go:94] pod "etcd-test-preload-567736" is "Ready"
	I1017 20:29:05.399436  116015 pod_ready.go:86] duration metric: took 2.505763209s for pod "etcd-test-preload-567736" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:29:05.401678  116015 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-567736" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:29:05.406334  116015 pod_ready.go:94] pod "kube-apiserver-test-preload-567736" is "Ready"
	I1017 20:29:05.406369  116015 pod_ready.go:86] duration metric: took 4.66452ms for pod "kube-apiserver-test-preload-567736" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:29:05.408564  116015 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-567736" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:29:05.414502  116015 pod_ready.go:94] pod "kube-controller-manager-test-preload-567736" is "Ready"
	I1017 20:29:05.414530  116015 pod_ready.go:86] duration metric: took 5.942845ms for pod "kube-controller-manager-test-preload-567736" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:29:05.484543  116015 pod_ready.go:83] waiting for pod "kube-proxy-glnnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:29:05.885332  116015 pod_ready.go:94] pod "kube-proxy-glnnt" is "Ready"
	I1017 20:29:05.885359  116015 pod_ready.go:86] duration metric: took 400.774783ms for pod "kube-proxy-glnnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:29:06.084646  116015 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-567736" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:29:06.485113  116015 pod_ready.go:94] pod "kube-scheduler-test-preload-567736" is "Ready"
	I1017 20:29:06.485140  116015 pod_ready.go:86] duration metric: took 400.469244ms for pod "kube-scheduler-test-preload-567736" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:29:06.485151  116015 pod_ready.go:40] duration metric: took 3.604792362s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
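	A note on the waits above: pod_ready.go polls each control-plane pod until its PodReady condition reports True, bounded by the 4m0s budget. Below is a minimal client-go sketch of an equivalent readiness poll; the file name, pod/namespace constants, kubeconfig handling, and poll interval are illustrative assumptions, not minikube's actual helper.

// podready.go: poll one pod until its Ready condition is True (sketch).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a reachable kubeconfig; minikube writes one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Poll every 500ms, give up after 4 minutes (the budget the log mentions).
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, getErr := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-test-preload-567736", metav1.GetOptions{})
			if getErr != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			return isReady(pod), nil
		})
	fmt.Println("ready:", err == nil)
}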
	I1017 20:29:06.529616  116015 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1017 20:29:06.531337  116015 out.go:203] 
	W1017 20:29:06.532715  116015 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1017 20:29:06.534006  116015 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1017 20:29:06.535573  116015 out.go:179] * Done! kubectl is now configured to use "test-preload-567736" cluster and "default" namespace by default
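	The skew warning above is a minor-version comparison: kubectl 1.34 against cluster 1.32 is two minors apart, beyond the one minor of skew kubectl officially supports. A rough sketch of that computation follows; the parsing helper is illustrative, not minikube's actual start.go code.

// skew.go: compute the kubectl/cluster minor-version skew (illustrative).
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.34.1", "1.32.0" // versions from the log above
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
	if skew > 1 {
		fmt.Println("! kubectl may have incompatibilities with the cluster")
	}
}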
	
	
	==> CRI-O <==
	Oct 17 20:29:07 test-preload-567736 crio[832]: time="2025-10-17 20:29:07.497565857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760732947497469641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab29ea17-ad93-4ca0-998d-e687fac994e3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 20:29:07 test-preload-567736 crio[832]: time="2025-10-17 20:29:07.498547510Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95751de4-20bf-4f52-8360-394044650084 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 20:29:07 test-preload-567736 crio[832]: time="2025-10-17 20:29:07.498604254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95751de4-20bf-4f52-8360-394044650084 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 20:29:07 test-preload-567736 crio[832]: time="2025-10-17 20:29:07.498796347Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:38e00d4b6b734ca5f78b3573b6728dcdabe5bcd3d46d5b5ed0c12f00fa49f0e1,PodSandboxId:1b485da33fd3057529fccd52681b9ba429beef3cff682a0354f068e07d6835b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760732940592712016,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-qktj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 613e0d53-4928-49e0-8ff5-2a9195622ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f663791b97dd0eab4a150f1a848cf6d395dee426412e09a61f7051542d208672,PodSandboxId:bc63b300e3dc34ff57adc117a1cc87291afd2ef43a0ea842ce2f5077f6f71515,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760732933015640779,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-glnnt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf250f50-85ce-4e8a-a3b7-2cbde28ed3aa,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c2197e7388db6b37e224974f2986e502d0afc3b20b069f445c9a442bbc11c,PodSandboxId:1c3137f4a1f1e549251519d8ac2f2abd1ed8502b0115111f4587dbcefa1a55a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760732933054667951,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cba2c5a-38dd-432f-bed0-8ad333b6e196,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb01a31cd0e2d957a9eb0f71c360e31090b25f04ecf1d83273302106b0e965b7,PodSandboxId:b974764ef0430d62e9f769680bd54f1e4eb7a57136c4f3829e555f2466e75ee7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760732928349141194,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-567736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa86d3531e2e9a798c294c483291e5e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e31154beb7b96f14fc9ecba71e77afb33caa940093a03bca81105e67fc9b97a,PodSandboxId:fbffb7ed3a5779edaf27573a0195ec47b9e2ba8cd2327643b2fd0d77331ce3b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760732928394975468,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-567736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79e3e5238443285c0ed4375ce61620be,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:412119b5d703857eb58042f34d6bc051729002edc6d7d30e88f65f4c5282b593,PodSandboxId:75af151300f8d0d30ec14ae064f39ae123c303d73acdd488732540236abc84a1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760732928350144069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-567736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 981f2f70b81fa359d083be09f64f4c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad9e681cfd5134162bac4154b22dbe48b4d446f99d5e31f276072ee02235cc69,PodSandboxId:a76d44a1762b918fe3a3c780814c3faf71a9eb4ad4948d9b279bf7b92243c29d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760732928365561870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-567736,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2047d87fdeb0e944cd7f82aba842227,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95751de4-20bf-4f52-8360-394044650084 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 20:29:07 test-preload-567736 crio[832]: time="2025-10-17 20:29:07.542242973Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f12c575-7b94-477b-a363-26c126e39f80 name=/runtime.v1.RuntimeService/Version
	Oct 17 20:29:07 test-preload-567736 crio[832]: time="2025-10-17 20:29:07.542419531Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f12c575-7b94-477b-a363-26c126e39f80 name=/runtime.v1.RuntimeService/Version
	Oct 17 20:29:07 test-preload-567736 crio[832]: time="2025-10-17 20:29:07.544276828Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1703663-af6c-4ef4-95d4-a5b1a1fea4c5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 20:29:07 test-preload-567736 crio[832]: time="2025-10-17 20:29:07.544781351Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760732947544756623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1703663-af6c-4ef4-95d4-a5b1a1fea4c5 name=/runtime.v1.ImageService/ImageFsInfo
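	The Version/ImageFsInfo/ListContainers exchanges above are routine CRI polling against CRI-O's socket (the same unix:///var/run/crio/crio.sock path appears in the node's cri-socket annotation below); CRI-O repeats the identical container list on every cycle. Here is a minimal Go sketch that issues the same two calls, assuming the published k8s.io/cri-api client and a grpc-go recent enough to resolve unix:// targets:

// crilist.go: call Version and ListContainers on CRI-O's socket (sketch).
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// grpc-go resolves "unix://" targets natively; this is the socket from the node annotation.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// An empty filter returns the full container list, as in the log above.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}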
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	38e00d4b6b734       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   7 seconds ago       Running             coredns                   1                   1b485da33fd30       coredns-668d6bf9bc-qktj2
	013c2197e7388       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   1c3137f4a1f1e       storage-provisioner
	f663791b97dd0       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   14 seconds ago      Running             kube-proxy                1                   bc63b300e3dc3       kube-proxy-glnnt
	1e31154beb7b9       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   fbffb7ed3a577       kube-apiserver-test-preload-567736
	ad9e681cfd513       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   a76d44a1762b9       kube-scheduler-test-preload-567736
	412119b5d7038       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   75af151300f8d       kube-controller-manager-test-preload-567736
	bb01a31cd0e2d       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   b974764ef0430       etcd-test-preload-567736
	
	
	==> coredns [38e00d4b6b734ca5f78b3573b6728dcdabe5bcd3d46d5b5ed0c12f00fa49f0e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41431 - 65509 "HINFO IN 8307820555970955968.5240769412588598187. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03541014s
	
	
	==> describe nodes <==
	Name:               test-preload-567736
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-567736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=test-preload-567736
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_27_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:27:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-567736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:29:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:29:02 +0000   Fri, 17 Oct 2025 20:27:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:29:02 +0000   Fri, 17 Oct 2025 20:27:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:29:02 +0000   Fri, 17 Oct 2025 20:27:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:29:02 +0000   Fri, 17 Oct 2025 20:29:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.81
	  Hostname:    test-preload-567736
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 5dc37ba87af14a69a21cc9e35cdeba2e
	  System UUID:                5dc37ba8-7af1-4a69-a21c-c9e35cdeba2e
	  Boot ID:                    292ec6e8-d14d-413a-99e2-248bab4cb590
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-qktj2                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     69s
	  kube-system                 etcd-test-preload-567736                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         75s
	  kube-system                 kube-apiserver-test-preload-567736             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-test-preload-567736    200m (10%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-glnnt                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-scheduler-test-preload-567736             100m (5%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 68s                kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   NodeHasSufficientMemory  74s                kubelet          Node test-preload-567736 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    74s                kubelet          Node test-preload-567736 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s                kubelet          Node test-preload-567736 status is now: NodeHasSufficientPID
	  Normal   Starting                 74s                kubelet          Starting kubelet.
	  Normal   NodeReady                73s                kubelet          Node test-preload-567736 status is now: NodeReady
	  Normal   RegisteredNode           70s                node-controller  Node test-preload-567736 event: Registered Node test-preload-567736 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-567736 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-567736 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-567736 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 15s                kubelet          Node test-preload-567736 has been rebooted, boot id: 292ec6e8-d14d-413a-99e2-248bab4cb590
	  Normal   RegisteredNode           12s                node-controller  Node test-preload-567736 event: Registered Node test-preload-567736 in Controller
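	The Conditions and Events tables above come straight from the node object's status. A short client-go sketch that reads the same conditions programmatically; the node name is taken from this report, while the kubeconfig handling is an illustrative assumption:

// nodecond.go: print a node's conditions, as in "describe nodes" (sketch).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "test-preload-567736", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		// Matches the Conditions table columns: type, status, reason, message.
		fmt.Printf("%-16s %-6s %-28s %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}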
	
	
	==> dmesg <==
	[Oct17 20:28] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000050] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007104] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.952791] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000005] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088528] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.095952] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.480817] kauditd_printk_skb: 177 callbacks suppressed
	[Oct17 20:29] kauditd_printk_skb: 128 callbacks suppressed
	
	
	==> etcd [bb01a31cd0e2d957a9eb0f71c360e31090b25f04ecf1d83273302106b0e965b7] <==
	{"level":"info","ts":"2025-10-17T20:28:48.894136Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:28:48.900856Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-17T20:28:48.901186Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"81f5d9acb096f107","initial-advertise-peer-urls":["https://192.168.39.81:2380"],"listen-peer-urls":["https://192.168.39.81:2380"],"advertise-client-urls":["https://192.168.39.81:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.81:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-17T20:28:48.902603Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-17T20:28:48.893313Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T20:28:48.902767Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T20:28:48.902789Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T20:28:48.902883Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.81:2380"}
	{"level":"info","ts":"2025-10-17T20:28:48.903544Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.81:2380"}
	{"level":"info","ts":"2025-10-17T20:28:50.751367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-17T20:28:50.751407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-17T20:28:50.751474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 received MsgPreVoteResp from 81f5d9acb096f107 at term 2"}
	{"level":"info","ts":"2025-10-17T20:28:50.751567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became candidate at term 3"}
	{"level":"info","ts":"2025-10-17T20:28:50.751584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 received MsgVoteResp from 81f5d9acb096f107 at term 3"}
	{"level":"info","ts":"2025-10-17T20:28:50.751592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became leader at term 3"}
	{"level":"info","ts":"2025-10-17T20:28:50.751599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 81f5d9acb096f107 elected leader 81f5d9acb096f107 at term 3"}
	{"level":"info","ts":"2025-10-17T20:28:50.753150Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"81f5d9acb096f107","local-member-attributes":"{Name:test-preload-567736 ClientURLs:[https://192.168.39.81:2379]}","request-path":"/0/members/81f5d9acb096f107/attributes","cluster-id":"a77bf2d9a9fbb59e","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-17T20:28:50.753246Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:28:50.753421Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:28:50.753733Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T20:28:50.753760Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-17T20:28:50.754356Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-17T20:28:50.754400Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-17T20:28:50.755169Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.81:2379"}
	{"level":"info","ts":"2025-10-17T20:28:50.755307Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:29:07 up 0 min,  0 users,  load average: 0.92, 0.27, 0.09
	Linux test-preload-567736 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1e31154beb7b96f14fc9ecba71e77afb33caa940093a03bca81105e67fc9b97a] <==
	I1017 20:28:51.978063       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:28:51.978279       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 20:28:51.978304       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 20:28:51.978391       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:28:51.996704       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1017 20:28:51.997101       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:28:51.997131       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:28:51.997147       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:28:51.997162       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:28:52.011075       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1017 20:28:52.024973       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:28:52.033209       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1017 20:28:52.033248       1 policy_source.go:240] refreshing policies
	I1017 20:28:52.065568       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1017 20:28:52.067896       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:28:52.077220       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:28:52.556455       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1017 20:28:52.874116       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:28:53.797306       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1017 20:28:53.832779       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1017 20:28:53.867567       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:28:53.875472       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:28:55.172768       1 controller.go:615] quota admission added evaluator for: endpoints
	I1017 20:28:55.260804       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:28:55.561910       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [412119b5d703857eb58042f34d6bc051729002edc6d7d30e88f65f4c5282b593] <==
	I1017 20:28:55.209724       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1017 20:28:55.210023       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1017 20:28:55.213344       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1017 20:28:55.215471       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1017 20:28:55.217910       1 shared_informer.go:320] Caches are synced for crt configmap
	I1017 20:28:55.219922       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1017 20:28:55.222166       1 shared_informer.go:320] Caches are synced for node
	I1017 20:28:55.222233       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:28:55.222266       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:28:55.222272       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1017 20:28:55.222276       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1017 20:28:55.222333       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-567736"
	I1017 20:28:55.229824       1 shared_informer.go:320] Caches are synced for persistent volume
	I1017 20:28:55.233323       1 shared_informer.go:320] Caches are synced for deployment
	I1017 20:28:55.234247       1 shared_informer.go:320] Caches are synced for PVC protection
	I1017 20:28:55.241811       1 shared_informer.go:320] Caches are synced for resource quota
	I1017 20:28:55.248170       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1017 20:28:55.574065       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="325.612974ms"
	I1017 20:28:55.574591       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="268.793µs"
	I1017 20:29:00.741069       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="71.701µs"
	I1017 20:29:00.779251       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.785833ms"
	I1017 20:29:00.780744       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="49.883µs"
	I1017 20:29:02.329055       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-567736"
	I1017 20:29:02.344859       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-567736"
	I1017 20:29:05.210083       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f663791b97dd0eab4a150f1a848cf6d395dee426412e09a61f7051542d208672] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1017 20:28:53.352444       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1017 20:28:53.364424       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.81"]
	E1017 20:28:53.364486       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:28:53.404039       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1017 20:28:53.404092       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1017 20:28:53.404125       1 server_linux.go:170] "Using iptables Proxier"
	I1017 20:28:53.407811       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:28:53.408041       1 server.go:497] "Version info" version="v1.32.0"
	I1017 20:28:53.408052       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:28:53.409617       1 config.go:199] "Starting service config controller"
	I1017 20:28:53.409657       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1017 20:28:53.409708       1 config.go:105] "Starting endpoint slice config controller"
	I1017 20:28:53.409713       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1017 20:28:53.412162       1 config.go:329] "Starting node config controller"
	I1017 20:28:53.412198       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1017 20:28:53.509858       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1017 20:28:53.509887       1 shared_informer.go:320] Caches are synced for service config
	I1017 20:28:53.512414       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ad9e681cfd5134162bac4154b22dbe48b4d446f99d5e31f276072ee02235cc69] <==
	I1017 20:28:49.445557       1 serving.go:386] Generated self-signed cert in-memory
	W1017 20:28:51.919225       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 20:28:51.919320       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 20:28:51.919344       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 20:28:51.919367       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 20:28:52.021293       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1017 20:28:52.021343       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:28:52.029340       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1017 20:28:52.029873       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:28:52.032983       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:28:52.033347       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1017 20:28:52.133965       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 17 20:28:52 test-preload-567736 kubelet[1164]: I1017 20:28:52.121614    1164 setters.go:602] "Node became not ready" node="test-preload-567736" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-17T20:28:52Z","lastTransitionTime":"2025-10-17T20:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Oct 17 20:28:52 test-preload-567736 kubelet[1164]: I1017 20:28:52.141059    1164 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-567736"
	Oct 17 20:28:52 test-preload-567736 kubelet[1164]: E1017 20:28:52.152944    1164 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-567736\" already exists" pod="kube-system/kube-controller-manager-test-preload-567736"
	Oct 17 20:28:52 test-preload-567736 kubelet[1164]: I1017 20:28:52.492348    1164 apiserver.go:52] "Watching apiserver"
	Oct 17 20:28:52 test-preload-567736 kubelet[1164]: E1017 20:28:52.496884    1164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-qktj2" podUID="613e0d53-4928-49e0-8ff5-2a9195622ac6"
	Oct 17 20:28:52 test-preload-567736 kubelet[1164]: I1017 20:28:52.516341    1164 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 17 20:28:52 test-preload-567736 kubelet[1164]: I1017 20:28:52.548235    1164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf250f50-85ce-4e8a-a3b7-2cbde28ed3aa-lib-modules\") pod \"kube-proxy-glnnt\" (UID: \"cf250f50-85ce-4e8a-a3b7-2cbde28ed3aa\") " pod="kube-system/kube-proxy-glnnt"
	Oct 17 20:28:52 test-preload-567736 kubelet[1164]: I1017 20:28:52.548735    1164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1cba2c5a-38dd-432f-bed0-8ad333b6e196-tmp\") pod \"storage-provisioner\" (UID: \"1cba2c5a-38dd-432f-bed0-8ad333b6e196\") " pod="kube-system/storage-provisioner"
	Oct 17 20:28:52 test-preload-567736 kubelet[1164]: I1017 20:28:52.548780    1164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf250f50-85ce-4e8a-a3b7-2cbde28ed3aa-xtables-lock\") pod \"kube-proxy-glnnt\" (UID: \"cf250f50-85ce-4e8a-a3b7-2cbde28ed3aa\") " pod="kube-system/kube-proxy-glnnt"
	Oct 17 20:28:52 test-preload-567736 kubelet[1164]: E1017 20:28:52.550348    1164 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 17 20:28:52 test-preload-567736 kubelet[1164]: E1017 20:28:52.550612    1164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/613e0d53-4928-49e0-8ff5-2a9195622ac6-config-volume podName:613e0d53-4928-49e0-8ff5-2a9195622ac6 nodeName:}" failed. No retries permitted until 2025-10-17 20:28:53.050588311 +0000 UTC m=+6.656443681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/613e0d53-4928-49e0-8ff5-2a9195622ac6-config-volume") pod "coredns-668d6bf9bc-qktj2" (UID: "613e0d53-4928-49e0-8ff5-2a9195622ac6") : object "kube-system"/"coredns" not registered
	Oct 17 20:28:52 test-preload-567736 kubelet[1164]: I1017 20:28:52.677762    1164 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-567736"
	Oct 17 20:28:52 test-preload-567736 kubelet[1164]: E1017 20:28:52.689531    1164 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-567736\" already exists" pod="kube-system/kube-apiserver-test-preload-567736"
	Oct 17 20:28:53 test-preload-567736 kubelet[1164]: E1017 20:28:53.052703    1164 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 17 20:28:53 test-preload-567736 kubelet[1164]: E1017 20:28:53.054017    1164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/613e0d53-4928-49e0-8ff5-2a9195622ac6-config-volume podName:613e0d53-4928-49e0-8ff5-2a9195622ac6 nodeName:}" failed. No retries permitted until 2025-10-17 20:28:54.052806162 +0000 UTC m=+7.658661544 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/613e0d53-4928-49e0-8ff5-2a9195622ac6-config-volume") pod "coredns-668d6bf9bc-qktj2" (UID: "613e0d53-4928-49e0-8ff5-2a9195622ac6") : object "kube-system"/"coredns" not registered
	Oct 17 20:28:54 test-preload-567736 kubelet[1164]: E1017 20:28:54.060005    1164 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 17 20:28:54 test-preload-567736 kubelet[1164]: E1017 20:28:54.060066    1164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/613e0d53-4928-49e0-8ff5-2a9195622ac6-config-volume podName:613e0d53-4928-49e0-8ff5-2a9195622ac6 nodeName:}" failed. No retries permitted until 2025-10-17 20:28:56.060055037 +0000 UTC m=+9.665910419 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/613e0d53-4928-49e0-8ff5-2a9195622ac6-config-volume") pod "coredns-668d6bf9bc-qktj2" (UID: "613e0d53-4928-49e0-8ff5-2a9195622ac6") : object "kube-system"/"coredns" not registered
	Oct 17 20:28:54 test-preload-567736 kubelet[1164]: E1017 20:28:54.544204    1164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-qktj2" podUID="613e0d53-4928-49e0-8ff5-2a9195622ac6"
	Oct 17 20:28:56 test-preload-567736 kubelet[1164]: E1017 20:28:56.073345    1164 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 17 20:28:56 test-preload-567736 kubelet[1164]: E1017 20:28:56.073433    1164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/613e0d53-4928-49e0-8ff5-2a9195622ac6-config-volume podName:613e0d53-4928-49e0-8ff5-2a9195622ac6 nodeName:}" failed. No retries permitted until 2025-10-17 20:29:00.073419543 +0000 UTC m=+13.679274925 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/613e0d53-4928-49e0-8ff5-2a9195622ac6-config-volume") pod "coredns-668d6bf9bc-qktj2" (UID: "613e0d53-4928-49e0-8ff5-2a9195622ac6") : object "kube-system"/"coredns" not registered
	Oct 17 20:28:56 test-preload-567736 kubelet[1164]: E1017 20:28:56.543299    1164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-qktj2" podUID="613e0d53-4928-49e0-8ff5-2a9195622ac6"
	Oct 17 20:28:56 test-preload-567736 kubelet[1164]: E1017 20:28:56.586644    1164 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760732936586295834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 17 20:28:56 test-preload-567736 kubelet[1164]: E1017 20:28:56.586685    1164 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760732936586295834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 17 20:29:06 test-preload-567736 kubelet[1164]: E1017 20:29:06.588384    1164 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760732946588139475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 17 20:29:06 test-preload-567736 kubelet[1164]: E1017 20:29:06.588426    1164 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760732946588139475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [013c2197e7388db6b37e224974f2986e502d0afc3b20b069f445c9a442bbc11c] <==
	I1017 20:28:53.218981       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-567736 -n test-preload-567736
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-567736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-567736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-567736
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-567736: (1.008507651s)
--- FAIL: TestPreload (129.11s)

                                                
                                    

Test pass (233/270)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 7.59
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 5.21
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.15
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.65
22 TestOffline 62.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 139.78
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 9.56
35 TestAddons/parallel/Registry 18.36
36 TestAddons/parallel/RegistryCreds 0.82
38 TestAddons/parallel/InspektorGadget 5.63
39 TestAddons/parallel/MetricsServer 6.7
41 TestAddons/parallel/CSI 55.46
42 TestAddons/parallel/Headlamp 22.91
43 TestAddons/parallel/CloudSpanner 6.2
44 TestAddons/parallel/LocalPath 53.66
45 TestAddons/parallel/NvidiaDevicePlugin 6.12
46 TestAddons/parallel/Yakd 11.52
48 TestAddons/StoppedEnableDisable 88.44
49 TestCertOptions 64.73
50 TestCertExpiration 320.38
52 TestForceSystemdFlag 71.47
53 TestForceSystemdEnv 44.73
55 TestKVMDriverInstallOrUpdate 0.56
59 TestErrorSpam/setup 38.85
60 TestErrorSpam/start 0.35
61 TestErrorSpam/status 0.8
62 TestErrorSpam/pause 1.74
63 TestErrorSpam/unpause 1.99
64 TestErrorSpam/stop 92.01
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 78.06
69 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/KubeContext 0.05
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.3
76 TestFunctional/serial/CacheCmd/cache/add_local 1.15
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
85 TestFunctional/delete_echo-server_images 0
86 TestFunctional/delete_my-image_image 0
87 TestFunctional/delete_minikube_cached_images 0
92 TestMultiControlPlane/serial/StartCluster 207.87
93 TestMultiControlPlane/serial/DeployApp 7.29
94 TestMultiControlPlane/serial/PingHostFromPods 1.29
95 TestMultiControlPlane/serial/AddWorkerNode 75.66
96 TestMultiControlPlane/serial/NodeLabels 0.07
97 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
98 TestMultiControlPlane/serial/CopyFile 13.29
99 TestMultiControlPlane/serial/StopSecondaryNode 84.35
100 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
101 TestMultiControlPlane/serial/RestartSecondaryNode 36.88
102 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.97
103 TestMultiControlPlane/serial/RestartClusterKeepsNodes 392.24
104 TestMultiControlPlane/serial/DeleteSecondaryNode 18.68
105 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
106 TestMultiControlPlane/serial/StopCluster 241.38
107 TestMultiControlPlane/serial/RestartCluster 99.73
108 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
109 TestMultiControlPlane/serial/AddSecondaryNode 93.81
110 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.95
114 TestJSONOutput/start/Command 78.56
115 TestJSONOutput/start/Audit 0
117 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
118 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
120 TestJSONOutput/pause/Command 0.8
121 TestJSONOutput/pause/Audit 0
123 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
124 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
126 TestJSONOutput/unpause/Command 0.7
127 TestJSONOutput/unpause/Audit 0
129 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
130 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
132 TestJSONOutput/stop/Command 6.99
133 TestJSONOutput/stop/Audit 0
135 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
136 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
137 TestErrorJSONOutput 0.2
142 TestMainNoArgs 0.05
143 TestMinikubeProfile 85.3
146 TestMountStart/serial/StartWithMountFirst 24.38
147 TestMountStart/serial/VerifyMountFirst 0.38
148 TestMountStart/serial/StartWithMountSecond 23.4
149 TestMountStart/serial/VerifyMountSecond 0.38
150 TestMountStart/serial/DeleteFirst 0.71
151 TestMountStart/serial/VerifyMountPostDelete 0.39
152 TestMountStart/serial/Stop 1.37
153 TestMountStart/serial/RestartStopped 20.74
154 TestMountStart/serial/VerifyMountPostStop 0.39
157 TestMultiNode/serial/FreshStart2Nodes 127.31
158 TestMultiNode/serial/DeployApp2Nodes 5.47
159 TestMultiNode/serial/PingHostFrom2Pods 0.8
160 TestMultiNode/serial/AddNode 46.8
161 TestMultiNode/serial/MultiNodeLabels 0.07
162 TestMultiNode/serial/ProfileList 0.61
163 TestMultiNode/serial/CopyFile 7.41
164 TestMultiNode/serial/StopNode 2.51
165 TestMultiNode/serial/StartAfterStop 37.66
166 TestMultiNode/serial/RestartKeepsNodes 328.5
167 TestMultiNode/serial/DeleteNode 2.8
168 TestMultiNode/serial/StopMultiNode 159.32
169 TestMultiNode/serial/RestartMultiNode 87.64
170 TestMultiNode/serial/ValidateNameConflict 42.19
177 TestScheduledStopUnix 114.06
181 TestRunningBinaryUpgrade 165.15
183 TestKubernetesUpgrade 269.72
186 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
187 TestNoKubernetes/serial/StartWithK8s 85.33
196 TestPause/serial/Start 105.1
197 TestNoKubernetes/serial/StartWithStopK8s 48.99
198 TestNoKubernetes/serial/Start 23.9
199 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
200 TestNoKubernetes/serial/ProfileList 4.44
201 TestNoKubernetes/serial/Stop 1.49
202 TestNoKubernetes/serial/StartNoArgs 20.34
203 TestStoppedBinaryUpgrade/Setup 0.73
204 TestStoppedBinaryUpgrade/Upgrade 102.53
205 TestPause/serial/SecondStartNoReconfiguration 69.24
206 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
207 TestPause/serial/Pause 1.32
208 TestPause/serial/VerifyStatus 0.29
209 TestPause/serial/Unpause 0.9
210 TestPause/serial/PauseAgain 0.99
211 TestPause/serial/DeletePaused 0.88
212 TestPause/serial/VerifyDeletedResources 2.61
220 TestNetworkPlugins/group/false 3.87
224 TestStoppedBinaryUpgrade/MinikubeLogs 1.27
226 TestStartStop/group/old-k8s-version/serial/FirstStart 132.7
228 TestStartStop/group/no-preload/serial/FirstStart 142.5
230 TestStartStop/group/embed-certs/serial/FirstStart 97.35
231 TestStartStop/group/old-k8s-version/serial/DeployApp 10.35
232 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.18
233 TestStartStop/group/old-k8s-version/serial/Stop 88.89
234 TestStartStop/group/embed-certs/serial/DeployApp 10.28
235 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.03
236 TestStartStop/group/embed-certs/serial/Stop 87.28
237 TestStartStop/group/no-preload/serial/DeployApp 9.29
238 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1
239 TestStartStop/group/no-preload/serial/Stop 84.66
240 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
241 TestStartStop/group/old-k8s-version/serial/SecondStart 48.77
242 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
243 TestStartStop/group/embed-certs/serial/SecondStart 49.78
244 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
245 TestStartStop/group/no-preload/serial/SecondStart 74.7
246 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 10.01
247 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
248 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.34
249 TestStartStop/group/old-k8s-version/serial/Pause 3.64
250 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
252 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.11
253 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.24
254 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
255 TestStartStop/group/embed-certs/serial/Pause 4.65
257 TestStartStop/group/newest-cni/serial/FirstStart 57.89
258 TestNetworkPlugins/group/auto/Start 112.5
259 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
260 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
261 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
262 TestStartStop/group/no-preload/serial/Pause 4.25
263 TestNetworkPlugins/group/calico/Start 85.33
264 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.42
265 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.53
266 TestStartStop/group/default-k8s-diff-port/serial/Stop 82.83
267 TestStartStop/group/newest-cni/serial/DeployApp 0
268 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.03
269 TestStartStop/group/newest-cni/serial/Stop 10.97
270 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
271 TestStartStop/group/newest-cni/serial/SecondStart 37.65
272 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
273 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
274 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
275 TestStartStop/group/newest-cni/serial/Pause 3.91
276 TestNetworkPlugins/group/auto/KubeletFlags 0.25
277 TestNetworkPlugins/group/auto/NetCatPod 11.32
278 TestNetworkPlugins/group/custom-flannel/Start 73.81
279 TestNetworkPlugins/group/auto/DNS 0.15
280 TestNetworkPlugins/group/auto/Localhost 0.14
281 TestNetworkPlugins/group/auto/HairPin 0.14
282 TestNetworkPlugins/group/calico/ControllerPod 6.01
283 TestNetworkPlugins/group/calico/KubeletFlags 0.22
284 TestNetworkPlugins/group/calico/NetCatPod 11.39
285 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
286 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.9
287 TestNetworkPlugins/group/kindnet/Start 84.77
288 TestNetworkPlugins/group/calico/DNS 0.2
289 TestNetworkPlugins/group/calico/Localhost 0.17
290 TestNetworkPlugins/group/calico/HairPin 0.2
291 TestNetworkPlugins/group/flannel/Start 91.95
292 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.01
293 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
294 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.3
295 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
296 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
297 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.62
298 TestNetworkPlugins/group/custom-flannel/DNS 0.21
299 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
300 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
301 TestNetworkPlugins/group/enable-default-cni/Start 90.89
302 TestNetworkPlugins/group/bridge/Start 93.43
303 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
305 TestNetworkPlugins/group/kindnet/NetCatPod 13.31
306 TestNetworkPlugins/group/kindnet/DNS 0.17
307 TestNetworkPlugins/group/kindnet/Localhost 0.16
308 TestNetworkPlugins/group/kindnet/HairPin 0.15
309 TestNetworkPlugins/group/flannel/ControllerPod 6.01
310 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
311 TestNetworkPlugins/group/flannel/NetCatPod 10.3
312 TestNetworkPlugins/group/flannel/DNS 0.19
313 TestNetworkPlugins/group/flannel/Localhost 0.14
314 TestNetworkPlugins/group/flannel/HairPin 0.16
315 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
316 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.24
317 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
318 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
319 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
320 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
321 TestNetworkPlugins/group/bridge/NetCatPod 10.25
322 TestNetworkPlugins/group/bridge/DNS 0.15
323 TestNetworkPlugins/group/bridge/Localhost 0.12
324 TestNetworkPlugins/group/bridge/HairPin 0.13

TestDownloadOnly/v1.28.0/json-events (7.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-010730 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-010730 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (7.590322472s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.59s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1017 18:56:20.372863   79439 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1017 18:56:20.373062   79439 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-010730
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-010730: exit status 85 (64.678424ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-010730 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-010730 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 18:56:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 18:56:12.824104   79450 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:56:12.824361   79450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:12.824371   79450 out.go:374] Setting ErrFile to fd 2...
	I1017 18:56:12.824375   79450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:12.824605   79450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-75534/.minikube/bin
	W1017 18:56:12.824736   79450 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21753-75534/.minikube/config/config.json: open /home/jenkins/minikube-integration/21753-75534/.minikube/config/config.json: no such file or directory
	I1017 18:56:12.825204   79450 out.go:368] Setting JSON to true
	I1017 18:56:12.826234   79450 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5924,"bootTime":1760721449,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 18:56:12.826338   79450 start.go:141] virtualization: kvm guest
	I1017 18:56:12.829204   79450 out.go:99] [download-only-010730] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 18:56:12.829340   79450 notify.go:220] Checking for updates...
	W1017 18:56:12.829367   79450 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball: no such file or directory
	I1017 18:56:12.830581   79450 out.go:171] MINIKUBE_LOCATION=21753
	I1017 18:56:12.831906   79450 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 18:56:12.833317   79450 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig
	I1017 18:56:12.834630   79450 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube
	I1017 18:56:12.835957   79450 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1017 18:56:12.838391   79450 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1017 18:56:12.838685   79450 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 18:56:12.871958   79450 out.go:99] Using the kvm2 driver based on user configuration
	I1017 18:56:12.871996   79450 start.go:305] selected driver: kvm2
	I1017 18:56:12.872003   79450 start.go:925] validating driver "kvm2" against <nil>
	I1017 18:56:12.872376   79450 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 18:56:12.872467   79450 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 18:56:12.886658   79450 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 18:56:12.886703   79450 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 18:56:12.901427   79450 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 18:56:12.901477   79450 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 18:56:12.902118   79450 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1017 18:56:12.902288   79450 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 18:56:12.902316   79450 cni.go:84] Creating CNI manager for ""
	I1017 18:56:12.902363   79450 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 18:56:12.902374   79450 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1017 18:56:12.902423   79450 start.go:349] cluster config:
	{Name:download-only-010730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-010730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 18:56:12.902637   79450 iso.go:125] acquiring lock: {Name:mk89d24a0bd9a0a8cf0564a4affa55e11eaff101 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 18:56:12.904601   79450 out.go:99] Downloading VM boot image ...
	I1017 18:56:12.904660   79450 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21753-75534/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1017 18:56:15.931448   79450 out.go:99] Starting "download-only-010730" primary control-plane node in "download-only-010730" cluster
	I1017 18:56:15.931481   79450 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 18:56:15.949440   79450 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1017 18:56:15.949477   79450 cache.go:58] Caching tarball of preloaded images
	I1017 18:56:15.949677   79450 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 18:56:15.951682   79450 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1017 18:56:15.951711   79450 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1017 18:56:15.972313   79450 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1017 18:56:15.972422   79450 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-010730 host does not exist
	  To start a cluster, run: "minikube start -p download-only-010730"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-010730
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (5.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-361182 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-361182 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (5.211508954s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.21s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1017 18:56:25.939365   79439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1017 18:56:25.939413   79439 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-361182
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-361182: exit status 85 (60.308997ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-010730 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-010730 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-010730                                                                                                                                                                             │ download-only-010730 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ start   │ -o=json --download-only -p download-only-361182 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-361182 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 18:56:20
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 18:56:20.771025   79654 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:56:20.771153   79654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:20.771162   79654 out.go:374] Setting ErrFile to fd 2...
	I1017 18:56:20.771166   79654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:20.771403   79654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-75534/.minikube/bin
	I1017 18:56:20.771937   79654 out.go:368] Setting JSON to true
	I1017 18:56:20.772793   79654 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5932,"bootTime":1760721449,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 18:56:20.772895   79654 start.go:141] virtualization: kvm guest
	I1017 18:56:20.774728   79654 out.go:99] [download-only-361182] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 18:56:20.774923   79654 notify.go:220] Checking for updates...
	I1017 18:56:20.776259   79654 out.go:171] MINIKUBE_LOCATION=21753
	I1017 18:56:20.777521   79654 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 18:56:20.778582   79654 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig
	I1017 18:56:20.779745   79654 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube
	I1017 18:56:20.780938   79654 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1017 18:56:20.783020   79654 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1017 18:56:20.783281   79654 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 18:56:20.814056   79654 out.go:99] Using the kvm2 driver based on user configuration
	I1017 18:56:20.814087   79654 start.go:305] selected driver: kvm2
	I1017 18:56:20.814094   79654 start.go:925] validating driver "kvm2" against <nil>
	I1017 18:56:20.814404   79654 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 18:56:20.814512   79654 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 18:56:20.828570   79654 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 18:56:20.828610   79654 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-75534/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 18:56:20.843932   79654 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 18:56:20.843981   79654 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 18:56:20.844574   79654 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1017 18:56:20.844728   79654 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 18:56:20.844753   79654 cni.go:84] Creating CNI manager for ""
	I1017 18:56:20.844805   79654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 18:56:20.844814   79654 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1017 18:56:20.844871   79654 start.go:349] cluster config:
	{Name:download-only-361182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-361182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 18:56:20.844959   79654 iso.go:125] acquiring lock: {Name:mk89d24a0bd9a0a8cf0564a4affa55e11eaff101 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 18:56:20.846706   79654 out.go:99] Starting "download-only-361182" primary control-plane node in "download-only-361182" cluster
	I1017 18:56:20.846725   79654 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:56:20.865502   79654 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 18:56:20.865572   79654 cache.go:58] Caching tarball of preloaded images
	I1017 18:56:20.865738   79654 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:56:20.867659   79654 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1017 18:56:20.867697   79654 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1017 18:56:20.893716   79654 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1017 18:56:20.893775   79654 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21753-75534/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 18:56:25.481721   79654 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 18:56:25.482074   79654 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/download-only-361182/config.json ...
	I1017 18:56:25.482104   79654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/download-only-361182/config.json: {Name:mkd58d00c4cda74a3f4099c9640773044562748b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:25.482276   79654 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:56:25.482463   79654 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21753-75534/.minikube/cache/linux/amd64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-361182 host does not exist
	  To start a cluster, run: "minikube start -p download-only-361182"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-361182
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.65s)

=== RUN   TestBinaryMirror
I1017 18:56:26.540982   79439 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-299360 --alsologtostderr --binary-mirror http://127.0.0.1:41563 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-299360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-299360
--- PASS: TestBinaryMirror (0.65s)
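
For context: --binary-mirror points minikube's Kubernetes binary downloads (e.g. the kubectl fetch logged above) at an alternate HTTP endpoint instead of dl.k8s.io; this run targets a stub server on 127.0.0.1:41563. A minimal manual sketch, with an illustrative profile name:

	out/minikube-linux-amd64 start --download-only -p mirror-demo --binary-mirror http://127.0.0.1:41563 --driver=kvm2 --container-runtime=crio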

TestOffline (62.63s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-000352 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-000352 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m1.733237991s)
helpers_test.go:175: Cleaning up "offline-crio-000352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-000352
--- PASS: TestOffline (62.63s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-768633
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-768633: exit status 85 (54.427221ms)

-- stdout --
	* Profile "addons-768633" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-768633"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-768633
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-768633: exit status 85 (55.026609ms)

-- stdout --
	* Profile "addons-768633" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-768633"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (139.78s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-768633 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-768633 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m19.775517122s)
--- PASS: TestAddons/Setup (139.78s)
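
The setup above enables the full addon matrix in a single start via repeated --addons flags. Addons can equally be toggled after the cluster is up; a short sketch against the profile created here:

	out/minikube-linux-amd64 addons list -p addons-768633
	out/minikube-linux-amd64 addons enable ingress -p addons-768633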

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-768633 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-768633 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/serial/GCPAuth/FakeCredentials (9.56s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-768633 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-768633 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a170d263-cd6b-4c5a-aff4-09a23f0f9b95] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a170d263-cd6b-4c5a-aff4-09a23f0f9b95] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004702012s
addons_test.go:694: (dbg) Run:  kubectl --context addons-768633 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-768633 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-768633 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.56s)

TestAddons/parallel/Registry (18.36s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.322996ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-hqf8t" [35c22bac-fb0b-47ed-a059-1d4ce279275b] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.332211413s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-v6ggf" [df1a7ef1-9163-4776-9a9c-ac545ca6ecc0] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011210632s
addons_test.go:392: (dbg) Run:  kubectl --context addons-768633 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-768633 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-768633 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.840534496s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 ip
2025/10/17 18:59:22 [DEBUG] GET http://192.168.39.150:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.36s)

TestAddons/parallel/RegistryCreds (0.82s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.000179ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-768633
addons_test.go:332: (dbg) Run:  kubectl --context addons-768633 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.82s)

TestAddons/parallel/InspektorGadget (5.63s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-xnjmh" [ab42516c-6fe5-434c-8040-ef81e89c7bc6] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.246858712s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.63s)

TestAddons/parallel/MetricsServer (6.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 10.084693ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-5fqt5" [69ada210-4511-40aa-b098-0b90b6815015] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.334419184s
addons_test.go:463: (dbg) Run:  kubectl --context addons-768633 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-768633 addons disable metrics-server --alsologtostderr -v=1: (1.265968384s)
--- PASS: TestAddons/parallel/MetricsServer (6.70s)

TestAddons/parallel/CSI (55.46s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1017 18:59:24.294796   79439 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1017 18:59:24.301179   79439 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1017 18:59:24.301209   79439 kapi.go:107] duration metric: took 6.416517ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.438159ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-768633 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-768633 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [9a523369-c68e-479a-acf4-f58b0baacaa8] Pending
helpers_test.go:352: "task-pv-pod" [9a523369-c68e-479a-acf4-f58b0baacaa8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [9a523369-c68e-479a-acf4-f58b0baacaa8] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003457671s
addons_test.go:572: (dbg) Run:  kubectl --context addons-768633 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-768633 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-768633 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-768633 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-768633 delete pod task-pv-pod: (1.346080027s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-768633 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-768633 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-768633 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [065cdf8a-585b-402d-8eba-36e8abc33785] Pending
helpers_test.go:352: "task-pv-pod-restore" [065cdf8a-585b-402d-8eba-36e8abc33785] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [065cdf8a-585b-402d-8eba-36e8abc33785] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.007063046s
addons_test.go:614: (dbg) Run:  kubectl --context addons-768633 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-768633 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-768633 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-768633 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.05740321s)
--- PASS: TestAddons/parallel/CSI (55.46s)
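
The CSI sequence above exercises provision -> snapshot -> restore using testdata manifests whose contents are not reproduced in this log. For orientation, the snapshot step corresponds to an object of roughly this shape (the snapshot class name is an assumed addon default, not taken from this run):

	kubectl --context addons-768633 apply -f - <<'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed default class
	  source:
	    persistentVolumeClaimName: hpvc
	EOF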

TestAddons/parallel/Headlamp (22.91s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-768633 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-qpqvb" [c58f10ef-c548-477c-ba63-d51089ba8423] Pending
helpers_test.go:352: "headlamp-6945c6f4d-qpqvb" [c58f10ef-c548-477c-ba63-d51089ba8423] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-qpqvb" [c58f10ef-c548-477c-ba63-d51089ba8423] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-qpqvb" [c58f10ef-c548-477c-ba63-d51089ba8423] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.003546809s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-768633 addons disable headlamp --alsologtostderr -v=1: (5.951741983s)
--- PASS: TestAddons/parallel/Headlamp (22.91s)

TestAddons/parallel/CloudSpanner (6.2s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-8l26t" [d817f860-626c-407e-9ede-cd0e86466ebe] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010028774s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-768633 addons disable cloud-spanner --alsologtostderr -v=1: (1.182918523s)
--- PASS: TestAddons/parallel/CloudSpanner (6.20s)

TestAddons/parallel/LocalPath (53.66s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-768633 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-768633 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-768633 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [f261589e-e6e6-4f2c-8d34-3b47bf9880ae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [f261589e-e6e6-4f2c-8d34-3b47bf9880ae] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [f261589e-e6e6-4f2c-8d34-3b47bf9880ae] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00812543s
addons_test.go:967: (dbg) Run:  kubectl --context addons-768633 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 ssh "cat /opt/local-path-provisioner/pvc-ecc24882-93ae-4db8-b0d0-e3db34be0b9b_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-768633 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-768633 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-768633 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.69037846s)
--- PASS: TestAddons/parallel/LocalPath (53.66s)

TestAddons/parallel/NvidiaDevicePlugin (6.12s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-rk98j" [ba1839a9-838a-471c-bad5-74ae4ea0fbab] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.3422796s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.12s)

TestAddons/parallel/Yakd (11.52s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-6424b" [64ec109c-ccaa-416e-948b-3e9405739923] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.094087983s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-768633 addons disable yakd --alsologtostderr -v=1: (6.423919545s)
--- PASS: TestAddons/parallel/Yakd (11.52s)

TestAddons/StoppedEnableDisable (88.44s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-768633
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-768633: (1m28.149860131s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-768633
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-768633
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-768633
--- PASS: TestAddons/StoppedEnableDisable (88.44s)
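
Note that the enable/disable calls above run against a stopped cluster and still succeed: the toggles appear to update only the profile's stored addon configuration, which is reconciled on the next start. The sequence, condensed:

	out/minikube-linux-amd64 stop -p addons-768633
	out/minikube-linux-amd64 addons enable dashboard -p addons-768633
	out/minikube-linux-amd64 addons disable dashboard -p addons-768633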

TestCertOptions (64.73s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-296467 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-296467 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m3.175754604s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-296467 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-296467 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-296467 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-296467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-296467
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-296467: (1.033972102s)
--- PASS: TestCertOptions (64.73s)

TestCertExpiration (320.38s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-392470 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-392470 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m11.599385917s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-392470 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-392470 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.555828689s)
helpers_test.go:175: Cleaning up "cert-expiration-392470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-392470
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-392470: (1.2199627s)
--- PASS: TestCertExpiration (320.38s)

TestForceSystemdFlag (71.47s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-449009 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-449009 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m10.128784415s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-449009 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-449009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-449009
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-449009: (1.04178382s)
--- PASS: TestForceSystemdFlag (71.47s)
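
--force-systemd switches the container runtime to the systemd cgroup manager, and the test confirms it by reading CRI-O's drop-in config. A manual spot-check along the same lines (the grep key assumes CRI-O's standard cgroup_manager setting):

	out/minikube-linux-amd64 -p force-systemd-flag-449009 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"

On success this should print a line like: cgroup_manager = "systemd"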

TestForceSystemdEnv (44.73s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-568020 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
I1017 20:35:21.117620   79439 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1017 20:35:21.117790   79439 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1586725529/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1017 20:35:21.147984   79439 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1586725529/001/docker-machine-driver-kvm2 version is 1.1.1
W1017 20:35:21.148025   79439 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1017 20:35:21.148206   79439 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1017 20:35:21.148253   79439 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1586725529/001/docker-machine-driver-kvm2
I1017 20:35:21.526049   79439 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1586725529/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1017 20:35:21.541537   79439 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1586725529/001/docker-machine-driver-kvm2 version is 1.37.0
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-568020 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (43.388100778s)
helpers_test.go:175: Cleaning up "force-systemd-env-568020" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-568020
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-568020: (1.345894281s)
--- PASS: TestForceSystemdEnv (44.73s)

TestKVMDriverInstallOrUpdate (0.56s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (0.56s)

TestErrorSpam/setup (38.85s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-712449 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-712449 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 19:03:47.742026   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:03:47.750872   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:03:47.762822   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:03:47.784236   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:03:47.825650   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:03:47.907186   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:03:48.068764   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:03:48.390464   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:03:49.032506   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:03:50.313961   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:03:52.876785   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:03:57.998115   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-712449 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-712449 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.85338462s)
--- PASS: TestErrorSpam/setup (38.85s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.8s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 status
--- PASS: TestErrorSpam/status (0.80s)

TestErrorSpam/pause (1.74s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 pause
--- PASS: TestErrorSpam/pause (1.74s)

TestErrorSpam/unpause (1.99s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 unpause
--- PASS: TestErrorSpam/unpause (1.99s)

TestErrorSpam/stop (92.01s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 stop
E1017 19:04:08.240339   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:28.722389   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:05:09.685429   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 stop: (1m28.16348675s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 stop: (1.793990445s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-712449 --log_dir /tmp/nospam-712449 stop: (2.051110586s)
--- PASS: TestErrorSpam/stop (92.01s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21753-75534/.minikube/files/etc/test/nested/copy/79439/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (78.06s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-016863 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 19:06:31.610520   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-016863 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m18.06120189s)
--- PASS: TestFunctional/serial/StartWithProxy (78.06s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-016863 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-016863 cache add registry.k8s.io/pause:3.1: (1.094655382s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-016863 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-016863 cache add registry.k8s.io/pause:3.3: (1.120415499s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-016863 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-016863 cache add registry.k8s.io/pause:latest: (1.081933297s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.30s)

TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-016863 /tmp/TestFunctionalserialCacheCmdcacheadd_local3361632044/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-016863 cache add minikube-local-cache-test:functional-016863
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-016863 cache delete minikube-local-cache-test:functional-016863
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-016863
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-016863 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-016863 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-016863 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016863 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (211.488049ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-016863 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-016863 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
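
The round trip above is the point of the subtest: crictl rmi removes the image from the node's container runtime, crictl inspecti then fails with exit status 1 (the expected state), and minikube cache reload pushes the cached image back so inspecti succeeds again. A sketch of the same sequence in Go, assuming a running profile (names illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to minikube against an illustrative profile name.
	func run(args ...string) error {
		out, err := exec.Command("minikube",
			append([]string{"-p", "example-profile"}, args...)...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		img := "registry.k8s.io/pause:latest"
		// 1. Remove the image inside the node.
		_ = run("ssh", "sudo crictl rmi "+img)
		// 2. inspecti must now fail: the image is gone from the runtime.
		if run("ssh", "sudo crictl inspecti "+img) == nil {
			fmt.Println("expected inspecti to fail while the image is absent")
		}
		// 3. Reload minikube's cache into the node; inspecti works again.
		_ = run("cache", "reload")
		if err := run("ssh", "sudo crictl inspecti "+img); err != nil {
			fmt.Println("image still missing after reload:", err)
		}
	}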

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/delete_echo-server_images (0s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Non-zero exit: docker rmi -f kicbase/echo-server:1.0: context deadline exceeded (934ns)
functional_test.go:207: failed to remove image "kicbase/echo-server:1.0" from docker images. args "docker rmi -f kicbase/echo-server:1.0": context deadline exceeded
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-016863
functional_test.go:205: (dbg) Non-zero exit: docker rmi -f kicbase/echo-server:functional-016863: context deadline exceeded (364ns)
functional_test.go:207: failed to remove image "kicbase/echo-server:functional-016863" from docker images. args "docker rmi -f kicbase/echo-server:functional-016863": context deadline exceeded
--- PASS: TestFunctional/delete_echo-server_images (0.00s)

TestFunctional/delete_my-image_image (0s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-016863
functional_test.go:213: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-016863: context deadline exceeded (448ns)
functional_test.go:215: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-016863": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.00s)

TestFunctional/delete_minikube_cached_images (0s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-016863
functional_test.go:221: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-016863: context deadline exceeded (343ns)
functional_test.go:223: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-016863": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.00s)
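
All three delete_* cleanups above report "context deadline exceeded" after mere nanoseconds yet still PASS: the docker rmi commands run under the test's shared context, which has already expired by teardown time, so recent Go's os/exec returns the context error without ever starting the process, and the helpers treat removal as best-effort. A self-contained sketch of that semantics (not minikube code):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// A context whose deadline is already in the past, like a test
		// context at teardown time.
		ctx, cancel := context.WithDeadline(context.Background(),
			time.Now().Add(-time.Millisecond))
		defer cancel()

		// The process never starts; Run returns almost instantly, which
		// is why the log shows durations like (934ns).
		err := exec.CommandContext(ctx, "docker", "rmi", "-f",
			"kicbase/echo-server:1.0").Run()
		fmt.Println(err) // context deadline exceeded
	}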

TestMultiControlPlane/serial/StartCluster (207.87s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 19:48:47.751028   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-852523 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m27.120467591s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (207.87s)
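
The --ha flag provisions three control-plane nodes (ha-852523, -m02 and -m03 in the later status output, with -m04 as Worker) behind a shared virtual IP, which is why later health checks probe https://192.168.39.254:8443 rather than any single node's address. A sketch of counting control planes from status --output json; the struct mirrors the fields visible in the status dumps in this report, on the assumption that the JSON keys match those field names and that a multi-node profile prints a JSON array:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type nodeStatus struct {
		Name      string
		Host      string
		Kubelet   string
		APIServer string
		Worker    bool
	}

	func main() {
		// For a fully running multi-node profile this exits zero and
		// prints one JSON element per node.
		out, err := exec.Command("minikube", "-p", "example-profile",
			"status", "--output", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var nodes []nodeStatus
		if err := json.Unmarshal(out, &nodes); err != nil {
			log.Fatal(err)
		}
		controlPlanes := 0
		for _, n := range nodes {
			if !n.Worker {
				controlPlanes++
			}
		}
		fmt.Printf("%d control-plane node(s) of %d total\n", controlPlanes, len(nodes))
	}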

TestMultiControlPlane/serial/DeployApp (7.29s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-852523 kubectl -- rollout status deployment/busybox: (5.104546696s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- exec busybox-7b57f96db7-57gnt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- exec busybox-7b57f96db7-d2qz5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- exec busybox-7b57f96db7-xf9fz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- exec busybox-7b57f96db7-57gnt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- exec busybox-7b57f96db7-d2qz5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- exec busybox-7b57f96db7-xf9fz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- exec busybox-7b57f96db7-57gnt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- exec busybox-7b57f96db7-d2qz5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- exec busybox-7b57f96db7-xf9fz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.29s)

TestMultiControlPlane/serial/PingHostFromPods (1.29s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- exec busybox-7b57f96db7-57gnt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- exec busybox-7b57f96db7-57gnt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- exec busybox-7b57f96db7-d2qz5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- exec busybox-7b57f96db7-d2qz5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- exec busybox-7b57f96db7-xf9fz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 kubectl -- exec busybox-7b57f96db7-xf9fz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)
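
The shell pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) takes line 5 of busybox's nslookup output and extracts its third space-separated field, the host gateway IP that each pod then pings. The same extraction in Go, splitting on single spaces exactly as cut -d' ' does:

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIP mimics: nslookup ... | awk 'NR==5' | cut -d' ' -f3
	func hostIP(nslookupOutput string) (string, bool) {
		lines := strings.Split(nslookupOutput, "\n")
		if len(lines) < 5 {
			return "", false
		}
		// cut -d' ' splits on every single space, so use Split, not Fields.
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			return "", false
		}
		return fields[2], true
	}

	func main() {
		// Abbreviated busybox-style nslookup output, for illustration only.
		sample := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.39.1 host.minikube.internal\n"
		fmt.Println(hostIP(sample)) // 192.168.39.1 true
	}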

TestMultiControlPlane/serial/AddWorkerNode (75.66s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-852523 node add --alsologtostderr -v 5: (1m14.730320366s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (75.66s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-852523 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)
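
HAppy is not a typo: it is the status string minikube reports for an HA cluster with every node healthy, and the matching Degraded subtests later in this run assert that the status drops once a control-plane node goes down. All of these subtests read the field from profile list --output json; a sketch of doing the same (the JSON layout, a top-level "valid" array of profiles each carrying Name and Status, is an assumption about minikube's output, not verified here):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// profileList models the assumed shape of
	// `minikube profile list --output json`.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"` // e.g. "HAppy", "Degraded", "Stopped"
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("minikube", "profile", "list",
			"--output", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatal(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}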

TestMultiControlPlane/serial/CopyFile (13.29s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp testdata/cp-test.txt ha-852523:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2826607159/001/cp-test_ha-852523.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523:/home/docker/cp-test.txt ha-852523-m02:/home/docker/cp-test_ha-852523_ha-852523-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m02 "sudo cat /home/docker/cp-test_ha-852523_ha-852523-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523:/home/docker/cp-test.txt ha-852523-m03:/home/docker/cp-test_ha-852523_ha-852523-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m03 "sudo cat /home/docker/cp-test_ha-852523_ha-852523-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523:/home/docker/cp-test.txt ha-852523-m04:/home/docker/cp-test_ha-852523_ha-852523-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m04 "sudo cat /home/docker/cp-test_ha-852523_ha-852523-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp testdata/cp-test.txt ha-852523-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2826607159/001/cp-test_ha-852523-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523-m02:/home/docker/cp-test.txt ha-852523:/home/docker/cp-test_ha-852523-m02_ha-852523.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523 "sudo cat /home/docker/cp-test_ha-852523-m02_ha-852523.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523-m02:/home/docker/cp-test.txt ha-852523-m03:/home/docker/cp-test_ha-852523-m02_ha-852523-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m03 "sudo cat /home/docker/cp-test_ha-852523-m02_ha-852523-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523-m02:/home/docker/cp-test.txt ha-852523-m04:/home/docker/cp-test_ha-852523-m02_ha-852523-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m04 "sudo cat /home/docker/cp-test_ha-852523-m02_ha-852523-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp testdata/cp-test.txt ha-852523-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2826607159/001/cp-test_ha-852523-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523-m03:/home/docker/cp-test.txt ha-852523:/home/docker/cp-test_ha-852523-m03_ha-852523.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523 "sudo cat /home/docker/cp-test_ha-852523-m03_ha-852523.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523-m03:/home/docker/cp-test.txt ha-852523-m02:/home/docker/cp-test_ha-852523-m03_ha-852523-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m02 "sudo cat /home/docker/cp-test_ha-852523-m03_ha-852523-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523-m03:/home/docker/cp-test.txt ha-852523-m04:/home/docker/cp-test_ha-852523-m03_ha-852523-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m04 "sudo cat /home/docker/cp-test_ha-852523-m03_ha-852523-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp testdata/cp-test.txt ha-852523-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2826607159/001/cp-test_ha-852523-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523-m04:/home/docker/cp-test.txt ha-852523:/home/docker/cp-test_ha-852523-m04_ha-852523.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523 "sudo cat /home/docker/cp-test_ha-852523-m04_ha-852523.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523-m04:/home/docker/cp-test.txt ha-852523-m02:/home/docker/cp-test_ha-852523-m04_ha-852523-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m02 "sudo cat /home/docker/cp-test_ha-852523-m04_ha-852523-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 cp ha-852523-m04:/home/docker/cp-test.txt ha-852523-m03:/home/docker/cp-test_ha-852523-m04_ha-852523-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 ssh -n ha-852523-m03 "sudo cat /home/docker/cp-test_ha-852523-m04_ha-852523-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.29s)
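
The long command list above is a full copy matrix: testdata is pushed into each node with minikube cp, pulled back out to the host, copied from that node to every other node, and verified after each hop with ssh ... sudo cat. Rather than hand-writing the matrix, it can be generated; a sketch that prints the equivalent cp commands (profile and node names echo this run):

	package main

	import "fmt"

	func main() {
		profile := "ha-852523"
		nodes := []string{"ha-852523", "ha-852523-m02", "ha-852523-m03", "ha-852523-m04"}
		for _, src := range nodes {
			// Seed the file on src, then fan it out to every other node.
			fmt.Printf("minikube -p %s cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n",
				profile, src)
			for _, dst := range nodes {
				if dst == src {
					continue
				}
				fmt.Printf("minikube -p %s cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
					profile, src, dst, src, dst)
			}
		}
	}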

TestMultiControlPlane/serial/StopSecondaryNode (84.35s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 node stop m02 --alsologtostderr -v 5
E1017 19:53:30.819319   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:53:47.750216   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-852523 node stop m02 --alsologtostderr -v 5: (1m23.640950779s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-852523 status --alsologtostderr -v 5: exit status 7 (711.414882ms)

-- stdout --
	ha-852523
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-852523-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-852523-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-852523-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1017 19:53:52.982044   99232 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:53:52.982370   99232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:53:52.982382   99232 out.go:374] Setting ErrFile to fd 2...
	I1017 19:53:52.982386   99232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:53:52.982605   99232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-75534/.minikube/bin
	I1017 19:53:52.982794   99232 out.go:368] Setting JSON to false
	I1017 19:53:52.982824   99232 mustload.go:65] Loading cluster: ha-852523
	I1017 19:53:52.982952   99232 notify.go:220] Checking for updates...
	I1017 19:53:52.983215   99232 config.go:182] Loaded profile config "ha-852523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:53:52.983230   99232 status.go:174] checking status of ha-852523 ...
	I1017 19:53:52.983656   99232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:53:52.983688   99232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:53:53.005953   99232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35635
	I1017 19:53:53.006494   99232 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:53:53.007105   99232 main.go:141] libmachine: Using API Version  1
	I1017 19:53:53.007133   99232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:53:53.007596   99232 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:53:53.007843   99232 main.go:141] libmachine: (ha-852523) Calling .GetState
	I1017 19:53:53.009881   99232 status.go:371] ha-852523 host status = "Running" (err=<nil>)
	I1017 19:53:53.009898   99232 host.go:66] Checking if "ha-852523" exists ...
	I1017 19:53:53.010195   99232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:53:53.010232   99232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:53:53.025729   99232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I1017 19:53:53.026251   99232 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:53:53.026864   99232 main.go:141] libmachine: Using API Version  1
	I1017 19:53:53.026896   99232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:53:53.027324   99232 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:53:53.027719   99232 main.go:141] libmachine: (ha-852523) Calling .GetIP
	I1017 19:53:53.032224   99232 main.go:141] libmachine: (ha-852523) DBG | domain ha-852523 has defined MAC address 52:54:00:20:68:b6 in network mk-ha-852523
	I1017 19:53:53.032923   99232 main.go:141] libmachine: (ha-852523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:68:b6", ip: ""} in network mk-ha-852523: {Iface:virbr1 ExpiryTime:2025-10-17 20:47:39 +0000 UTC Type:0 Mac:52:54:00:20:68:b6 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-852523 Clientid:01:52:54:00:20:68:b6}
	I1017 19:53:53.032960   99232 main.go:141] libmachine: (ha-852523) DBG | domain ha-852523 has defined IP address 192.168.39.218 and MAC address 52:54:00:20:68:b6 in network mk-ha-852523
	I1017 19:53:53.033189   99232 host.go:66] Checking if "ha-852523" exists ...
	I1017 19:53:53.033514   99232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:53:53.033583   99232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:53:53.047729   99232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I1017 19:53:53.048337   99232 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:53:53.048943   99232 main.go:141] libmachine: Using API Version  1
	I1017 19:53:53.048970   99232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:53:53.049351   99232 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:53:53.049763   99232 main.go:141] libmachine: (ha-852523) Calling .DriverName
	I1017 19:53:53.050031   99232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:53:53.050061   99232 main.go:141] libmachine: (ha-852523) Calling .GetSSHHostname
	I1017 19:53:53.053871   99232 main.go:141] libmachine: (ha-852523) DBG | domain ha-852523 has defined MAC address 52:54:00:20:68:b6 in network mk-ha-852523
	I1017 19:53:53.054547   99232 main.go:141] libmachine: (ha-852523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:68:b6", ip: ""} in network mk-ha-852523: {Iface:virbr1 ExpiryTime:2025-10-17 20:47:39 +0000 UTC Type:0 Mac:52:54:00:20:68:b6 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:ha-852523 Clientid:01:52:54:00:20:68:b6}
	I1017 19:53:53.054585   99232 main.go:141] libmachine: (ha-852523) DBG | domain ha-852523 has defined IP address 192.168.39.218 and MAC address 52:54:00:20:68:b6 in network mk-ha-852523
	I1017 19:53:53.054822   99232 main.go:141] libmachine: (ha-852523) Calling .GetSSHPort
	I1017 19:53:53.055030   99232 main.go:141] libmachine: (ha-852523) Calling .GetSSHKeyPath
	I1017 19:53:53.055203   99232 main.go:141] libmachine: (ha-852523) Calling .GetSSHUsername
	I1017 19:53:53.055371   99232 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/ha-852523/id_rsa Username:docker}
	I1017 19:53:53.148696   99232 ssh_runner.go:195] Run: systemctl --version
	I1017 19:53:53.157567   99232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:53:53.178195   99232 kubeconfig.go:125] found "ha-852523" server: "https://192.168.39.254:8443"
	I1017 19:53:53.178237   99232 api_server.go:166] Checking apiserver status ...
	I1017 19:53:53.178286   99232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:53:53.206224   99232 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	W1017 19:53:53.220283   99232 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:53:53.220341   99232 ssh_runner.go:195] Run: ls
	I1017 19:53:53.225882   99232 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1017 19:53:53.233261   99232 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1017 19:53:53.233285   99232 status.go:463] ha-852523 apiserver status = Running (err=<nil>)
	I1017 19:53:53.233295   99232 status.go:176] ha-852523 status: &{Name:ha-852523 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:53:53.233323   99232 status.go:174] checking status of ha-852523-m02 ...
	I1017 19:53:53.233656   99232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:53:53.233700   99232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:53:53.248668   99232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45763
	I1017 19:53:53.249336   99232 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:53:53.249844   99232 main.go:141] libmachine: Using API Version  1
	I1017 19:53:53.249869   99232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:53:53.250238   99232 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:53:53.250400   99232 main.go:141] libmachine: (ha-852523-m02) Calling .GetState
	I1017 19:53:53.252310   99232 status.go:371] ha-852523-m02 host status = "Stopped" (err=<nil>)
	I1017 19:53:53.252324   99232 status.go:384] host is not running, skipping remaining checks
	I1017 19:53:53.252330   99232 status.go:176] ha-852523-m02 status: &{Name:ha-852523-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:53:53.252349   99232 status.go:174] checking status of ha-852523-m03 ...
	I1017 19:53:53.252729   99232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:53:53.252774   99232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:53:53.267759   99232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42263
	I1017 19:53:53.268278   99232 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:53:53.268861   99232 main.go:141] libmachine: Using API Version  1
	I1017 19:53:53.268908   99232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:53:53.269199   99232 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:53:53.269402   99232 main.go:141] libmachine: (ha-852523-m03) Calling .GetState
	I1017 19:53:53.271255   99232 status.go:371] ha-852523-m03 host status = "Running" (err=<nil>)
	I1017 19:53:53.271274   99232 host.go:66] Checking if "ha-852523-m03" exists ...
	I1017 19:53:53.271609   99232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:53:53.271675   99232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:53:53.285081   99232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36523
	I1017 19:53:53.285534   99232 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:53:53.286025   99232 main.go:141] libmachine: Using API Version  1
	I1017 19:53:53.286052   99232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:53:53.286382   99232 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:53:53.286610   99232 main.go:141] libmachine: (ha-852523-m03) Calling .GetIP
	I1017 19:53:53.289389   99232 main.go:141] libmachine: (ha-852523-m03) DBG | domain ha-852523-m03 has defined MAC address 52:54:00:2e:53:23 in network mk-ha-852523
	I1017 19:53:53.289846   99232 main.go:141] libmachine: (ha-852523-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:53:23", ip: ""} in network mk-ha-852523: {Iface:virbr1 ExpiryTime:2025-10-17 20:49:46 +0000 UTC Type:0 Mac:52:54:00:2e:53:23 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-852523-m03 Clientid:01:52:54:00:2e:53:23}
	I1017 19:53:53.289893   99232 main.go:141] libmachine: (ha-852523-m03) DBG | domain ha-852523-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:2e:53:23 in network mk-ha-852523
	I1017 19:53:53.290037   99232 host.go:66] Checking if "ha-852523-m03" exists ...
	I1017 19:53:53.290341   99232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:53:53.290384   99232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:53:53.304865   99232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44949
	I1017 19:53:53.305422   99232 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:53:53.306025   99232 main.go:141] libmachine: Using API Version  1
	I1017 19:53:53.306054   99232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:53:53.306622   99232 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:53:53.306856   99232 main.go:141] libmachine: (ha-852523-m03) Calling .DriverName
	I1017 19:53:53.307182   99232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:53:53.307211   99232 main.go:141] libmachine: (ha-852523-m03) Calling .GetSSHHostname
	I1017 19:53:53.310870   99232 main.go:141] libmachine: (ha-852523-m03) DBG | domain ha-852523-m03 has defined MAC address 52:54:00:2e:53:23 in network mk-ha-852523
	I1017 19:53:53.311426   99232 main.go:141] libmachine: (ha-852523-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:53:23", ip: ""} in network mk-ha-852523: {Iface:virbr1 ExpiryTime:2025-10-17 20:49:46 +0000 UTC Type:0 Mac:52:54:00:2e:53:23 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-852523-m03 Clientid:01:52:54:00:2e:53:23}
	I1017 19:53:53.311450   99232 main.go:141] libmachine: (ha-852523-m03) DBG | domain ha-852523-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:2e:53:23 in network mk-ha-852523
	I1017 19:53:53.311619   99232 main.go:141] libmachine: (ha-852523-m03) Calling .GetSSHPort
	I1017 19:53:53.311797   99232 main.go:141] libmachine: (ha-852523-m03) Calling .GetSSHKeyPath
	I1017 19:53:53.311982   99232 main.go:141] libmachine: (ha-852523-m03) Calling .GetSSHUsername
	I1017 19:53:53.312135   99232 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/ha-852523-m03/id_rsa Username:docker}
	I1017 19:53:53.401337   99232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:53:53.426385   99232 kubeconfig.go:125] found "ha-852523" server: "https://192.168.39.254:8443"
	I1017 19:53:53.426417   99232 api_server.go:166] Checking apiserver status ...
	I1017 19:53:53.426456   99232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:53:53.451323   99232 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1777/cgroup
	W1017 19:53:53.466074   99232 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1777/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:53:53.466144   99232 ssh_runner.go:195] Run: ls
	I1017 19:53:53.472160   99232 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1017 19:53:53.477112   99232 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1017 19:53:53.477142   99232 status.go:463] ha-852523-m03 apiserver status = Running (err=<nil>)
	I1017 19:53:53.477151   99232 status.go:176] ha-852523-m03 status: &{Name:ha-852523-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:53:53.477191   99232 status.go:174] checking status of ha-852523-m04 ...
	I1017 19:53:53.477490   99232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:53:53.477530   99232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:53:53.491012   99232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36577
	I1017 19:53:53.491511   99232 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:53:53.491986   99232 main.go:141] libmachine: Using API Version  1
	I1017 19:53:53.492008   99232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:53:53.492379   99232 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:53:53.492600   99232 main.go:141] libmachine: (ha-852523-m04) Calling .GetState
	I1017 19:53:53.494302   99232 status.go:371] ha-852523-m04 host status = "Running" (err=<nil>)
	I1017 19:53:53.494317   99232 host.go:66] Checking if "ha-852523-m04" exists ...
	I1017 19:53:53.494617   99232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:53:53.494697   99232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:53:53.509498   99232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I1017 19:53:53.510038   99232 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:53:53.510476   99232 main.go:141] libmachine: Using API Version  1
	I1017 19:53:53.510504   99232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:53:53.510896   99232 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:53:53.511099   99232 main.go:141] libmachine: (ha-852523-m04) Calling .GetIP
	I1017 19:53:53.514224   99232 main.go:141] libmachine: (ha-852523-m04) DBG | domain ha-852523-m04 has defined MAC address 52:54:00:8a:a3:85 in network mk-ha-852523
	I1017 19:53:53.514678   99232 main.go:141] libmachine: (ha-852523-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a3:85", ip: ""} in network mk-ha-852523: {Iface:virbr1 ExpiryTime:2025-10-17 20:51:16 +0000 UTC Type:0 Mac:52:54:00:8a:a3:85 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-852523-m04 Clientid:01:52:54:00:8a:a3:85}
	I1017 19:53:53.514703   99232 main.go:141] libmachine: (ha-852523-m04) DBG | domain ha-852523-m04 has defined IP address 192.168.39.168 and MAC address 52:54:00:8a:a3:85 in network mk-ha-852523
	I1017 19:53:53.514855   99232 host.go:66] Checking if "ha-852523-m04" exists ...
	I1017 19:53:53.515155   99232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:53:53.515192   99232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:53:53.529254   99232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46679
	I1017 19:53:53.529900   99232 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:53:53.530516   99232 main.go:141] libmachine: Using API Version  1
	I1017 19:53:53.530542   99232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:53:53.530930   99232 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:53:53.531200   99232 main.go:141] libmachine: (ha-852523-m04) Calling .DriverName
	I1017 19:53:53.531457   99232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:53:53.531479   99232 main.go:141] libmachine: (ha-852523-m04) Calling .GetSSHHostname
	I1017 19:53:53.535316   99232 main.go:141] libmachine: (ha-852523-m04) DBG | domain ha-852523-m04 has defined MAC address 52:54:00:8a:a3:85 in network mk-ha-852523
	I1017 19:53:53.535891   99232 main.go:141] libmachine: (ha-852523-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a3:85", ip: ""} in network mk-ha-852523: {Iface:virbr1 ExpiryTime:2025-10-17 20:51:16 +0000 UTC Type:0 Mac:52:54:00:8a:a3:85 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-852523-m04 Clientid:01:52:54:00:8a:a3:85}
	I1017 19:53:53.535920   99232 main.go:141] libmachine: (ha-852523-m04) DBG | domain ha-852523-m04 has defined IP address 192.168.39.168 and MAC address 52:54:00:8a:a3:85 in network mk-ha-852523
	I1017 19:53:53.536157   99232 main.go:141] libmachine: (ha-852523-m04) Calling .GetSSHPort
	I1017 19:53:53.536350   99232 main.go:141] libmachine: (ha-852523-m04) Calling .GetSSHKeyPath
	I1017 19:53:53.536531   99232 main.go:141] libmachine: (ha-852523-m04) Calling .GetSSHUsername
	I1017 19:53:53.536742   99232 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/ha-852523-m04/id_rsa Username:docker}
	I1017 19:53:53.623151   99232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:53:53.642507   99232 status.go:176] ha-852523-m04 status: &{Name:ha-852523-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (84.35s)
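
The exit status 7 from the status call is expected here, not a failure: per minikube's own help text for status, the exit code is a bit field (1 when the host is not OK, 2 when the cluster is not OK, 4 when Kubernetes is not OK), so the fully stopped m02 contributes 1+2+4 = 7 even though the remaining nodes are healthy. Decoding it in Go:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "-p", "example-profile", "status").Run()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all components running")
		case errors.As(err, &ee):
			code := ee.ExitCode()
			// Bit layout per `minikube status --help`.
			fmt.Println("host not OK:      ", code&1 != 0)
			fmt.Println("cluster not OK:   ", code&2 != 0)
			fmt.Println("kubernetes not OK:", code&4 != 0)
		default:
			fmt.Println("could not run minikube:", err)
		}
	}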

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (36.88s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-852523 node start m02 --alsologtostderr -v 5: (35.794792347s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-852523 status --alsologtostderr -v 5: (1.018432491s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.88s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (392.24s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 stop --alsologtostderr -v 5
E1017 19:58:47.750678   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-852523 stop --alsologtostderr -v 5: (4m19.154921525s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-852523 start --wait true --alsologtostderr -v 5: (2m12.968914349s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (392.24s)
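
Easy to miss in the log: the two node list calls bracket the stop/start cycle, and the subtest's invariant is that their output matches, i.e. a full cluster restart keeps every node. A sketch of that comparison, assuming node list prints one line per node (profile name illustrative):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func nodeList(profile string) string {
		out, err := exec.Command("minikube", "-p", profile, "node", "list").Output()
		if err != nil {
			log.Fatal(err)
		}
		return string(out)
	}

	func main() {
		profile := "example-profile"
		before := nodeList(profile)
		// ... run `minikube stop` and then `minikube start --wait true` ...
		after := nodeList(profile)
		if before != after {
			fmt.Printf("restart changed the node list:\n%s\nvs\n%s\n", before, after)
		}
	}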

TestMultiControlPlane/serial/DeleteSecondaryNode (18.68s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-852523 node delete m03 --alsologtostderr -v 5: (17.842622888s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.68s)
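
The go-template in the final command flattens each node's Ready condition to a bare True or False, one per line; after the delete the test expects a True for every remaining node and no False. Consuming that output in Go (template as in the run above, minus the shell quoting):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `go-template={{range .items}}{{range .status.conditions}}` +
			`{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).Output()
		if err != nil {
			log.Fatal(err)
		}
		ready, notReady := 0, 0
		for _, line := range strings.Split(string(out), "\n") {
			switch strings.TrimSpace(line) {
			case "True":
				ready++
			case "False", "Unknown":
				notReady++
			}
		}
		fmt.Printf("%d node(s) Ready, %d not Ready\n", ready, notReady)
	}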

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

TestMultiControlPlane/serial/StopCluster (241.38s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 stop --alsologtostderr -v 5
E1017 20:03:47.742875   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-852523 stop --alsologtostderr -v 5: (4m1.27302741s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-852523 status --alsologtostderr -v 5: exit status 7 (109.865903ms)

-- stdout --
	ha-852523
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-852523-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-852523-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1017 20:05:25.074039  103156 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:05:25.074303  103156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:05:25.074312  103156 out.go:374] Setting ErrFile to fd 2...
	I1017 20:05:25.074315  103156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:05:25.074519  103156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-75534/.minikube/bin
	I1017 20:05:25.074712  103156 out.go:368] Setting JSON to false
	I1017 20:05:25.074740  103156 mustload.go:65] Loading cluster: ha-852523
	I1017 20:05:25.074871  103156 notify.go:220] Checking for updates...
	I1017 20:05:25.075155  103156 config.go:182] Loaded profile config "ha-852523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:05:25.075175  103156 status.go:174] checking status of ha-852523 ...
	I1017 20:05:25.075731  103156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:05:25.075776  103156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:05:25.093941  103156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38041
	I1017 20:05:25.094459  103156 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:05:25.095035  103156 main.go:141] libmachine: Using API Version  1
	I1017 20:05:25.095059  103156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:05:25.095566  103156 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:05:25.095791  103156 main.go:141] libmachine: (ha-852523) Calling .GetState
	I1017 20:05:25.097615  103156 status.go:371] ha-852523 host status = "Stopped" (err=<nil>)
	I1017 20:05:25.097632  103156 status.go:384] host is not running, skipping remaining checks
	I1017 20:05:25.097639  103156 status.go:176] ha-852523 status: &{Name:ha-852523 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:05:25.097686  103156 status.go:174] checking status of ha-852523-m02 ...
	I1017 20:05:25.098022  103156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:05:25.098075  103156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:05:25.111450  103156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46389
	I1017 20:05:25.111999  103156 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:05:25.112558  103156 main.go:141] libmachine: Using API Version  1
	I1017 20:05:25.112588  103156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:05:25.113062  103156 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:05:25.113258  103156 main.go:141] libmachine: (ha-852523-m02) Calling .GetState
	I1017 20:05:25.115341  103156 status.go:371] ha-852523-m02 host status = "Stopped" (err=<nil>)
	I1017 20:05:25.115359  103156 status.go:384] host is not running, skipping remaining checks
	I1017 20:05:25.115366  103156 status.go:176] ha-852523-m02 status: &{Name:ha-852523-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:05:25.115385  103156 status.go:174] checking status of ha-852523-m04 ...
	I1017 20:05:25.115701  103156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:05:25.115746  103156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:05:25.128963  103156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44171
	I1017 20:05:25.129404  103156 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:05:25.129867  103156 main.go:141] libmachine: Using API Version  1
	I1017 20:05:25.129892  103156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:05:25.130315  103156 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:05:25.130536  103156 main.go:141] libmachine: (ha-852523-m04) Calling .GetState
	I1017 20:05:25.132708  103156 status.go:371] ha-852523-m04 host status = "Stopped" (err=<nil>)
	I1017 20:05:25.132723  103156 status.go:384] host is not running, skipping remaining checks
	I1017 20:05:25.132729  103156 status.go:176] ha-852523-m04 status: &{Name:ha-852523-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (241.38s)

TestMultiControlPlane/serial/RestartCluster (99.73s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-852523 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m38.948546651s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (99.73s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

TestMultiControlPlane/serial/AddSecondaryNode (93.81s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-852523 node add --control-plane --alsologtostderr -v 5: (1m32.899359576s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-852523 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (93.81s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

TestJSONOutput/start/Command (78.56s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-682704 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 20:08:47.745102   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-682704 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m18.560608544s)
--- PASS: TestJSONOutput/start/Command (78.56s)
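
With --output=json, minikube writes one JSON object per line in CloudEvents form; step events carry data.currentstep, data.totalsteps and data.message, which is what the Audit and parallel subtests below assert over (step numbers must be distinct and increasing). A sketch of scanning the stream; the field names follow minikube's JSON event output but should be treated as an assumption:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// stepEvent models minikube's CloudEvents-style JSON lines;
	// currentstep and totalsteps arrive as strings.
	type stepEvent struct {
		Type string `json:"type"`
		Data struct {
			CurrentStep string `json:"currentstep"`
			TotalSteps  string `json:"totalsteps"`
			Message     string `json:"message"`
		} `json:"data"`
	}

	func main() {
		// Pipe `minikube start --output=json ...` into stdin.
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev stepEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip non-JSON noise
			}
			if ev.Type == "io.k8s.sigs.minikube.step" {
				fmt.Printf("step %s/%s: %s\n",
					ev.Data.CurrentStep, ev.Data.TotalSteps, ev.Data.Message)
			}
		}
	}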

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.8s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-682704 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.80s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.7s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-682704 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.99s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-682704 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-682704 --output=json --user=testUser: (6.991570687s)
--- PASS: TestJSONOutput/stop/Command (6.99s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-957969 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-957969 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (65.700238ms)

-- stdout --
	{"specversion":"1.0","id":"87b17860-287c-4df7-900d-d604fd139c49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-957969] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b05b062b-6e3a-4293-b6fc-2ea3a67a9ed3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21753"}}
	{"specversion":"1.0","id":"87b18945-87ab-4e54-b9b9-b4ac17c1a451","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"091058ba-b8e2-44bf-b3f4-34f36a89952b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig"}}
	{"specversion":"1.0","id":"f49d4c7c-30d6-4d3e-9915-0259ce18bd86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube"}}
	{"specversion":"1.0","id":"48f0be28-3c48-4735-b6c3-960bb92fefbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"41d55e2e-6e72-4d24-bcb9-c8104cff0699","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c9375d4b-e875-4321-8de1-ec2dfe09763f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-957969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-957969
--- PASS: TestErrorJSONOutput (0.20s)
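
The stdout above is minikube's CloudEvents-style JSON stream: one event per line, with a string-to-string "data" payload. A minimal Go sketch of decoding a single event line, with the envelope fields taken from the events above (the sample line is a copy of the error event; error handling is trimmed for brevity):

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent mirrors the envelope printed by --output=json above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"c9375d4b-e875-4321-8de1-ec2dfe09763f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Prints: io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS exit 56
	fmt.Println(ev.Type, ev.Data["name"], "exit", ev.Data["exitcode"])
}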

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (85.3s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-708677 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-708677 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.575581425s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-710886 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-710886 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.885935805s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-708677
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-710886
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-710886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-710886
helpers_test.go:175: Cleaning up "first-708677" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-708677
--- PASS: TestMinikubeProfile (85.30s)

TestMountStart/serial/StartWithMountFirst (24.38s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-638444 --memory=3072 --mount-string /tmp/TestMountStartserial525418651/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-638444 --memory=3072 --mount-string /tmp/TestMountStartserial525418651/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (23.382731222s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.38s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-638444 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-638444 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
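
The verification above reads the guest's mount table with findmnt --json /minikube-host. A small Go sketch of checking that output for the expected target and filesystem type; the shape (a "filesystems" array with target/source/fstype/options) is findmnt's standard JSON layout, and the sample string is illustrative rather than captured from this run:

package main

import (
	"encoding/json"
	"fmt"
)

// findmntOutput models findmnt's --json output.
type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// Illustrative sample; a real run would capture this via
	// minikube ssh -- findmnt --json /minikube-host.
	raw := `{"filesystems":[{"target":"/minikube-host","source":"192.168.39.1","fstype":"9p","options":"rw,relatime"}]}`
	var out findmntOutput
	if err := json.Unmarshal([]byte(raw), &out); err != nil {
		panic(err)
	}
	for _, fs := range out.Filesystems {
		fmt.Printf("%s mounted from %s (%s)\n", fs.Target, fs.Source, fs.FSType)
	}
}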

TestMountStart/serial/StartWithMountSecond (23.4s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-660657 --memory=3072 --mount-string /tmp/TestMountStartserial525418651/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-660657 --memory=3072 --mount-string /tmp/TestMountStartserial525418651/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (22.395437962s)
--- PASS: TestMountStart/serial/StartWithMountSecond (23.40s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-660657 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-660657 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-638444 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-660657 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-660657 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.37s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-660657
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-660657: (1.370306498s)
--- PASS: TestMountStart/serial/Stop (1.37s)

TestMountStart/serial/RestartStopped (20.74s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-660657
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-660657: (19.735988657s)
--- PASS: TestMountStart/serial/RestartStopped (20.74s)

TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-660657 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-660657 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (127.31s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-048707 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 20:13:47.742582   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-048707 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m6.873621572s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (127.31s)

TestMultiNode/serial/DeployApp2Nodes (5.47s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048707 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048707 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-048707 -- rollout status deployment/busybox: (3.95085797s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048707 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048707 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048707 -- exec busybox-7b57f96db7-l72jq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048707 -- exec busybox-7b57f96db7-zxjt9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048707 -- exec busybox-7b57f96db7-l72jq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048707 -- exec busybox-7b57f96db7-zxjt9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048707 -- exec busybox-7b57f96db7-l72jq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048707 -- exec busybox-7b57f96db7-zxjt9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.47s)

TestMultiNode/serial/PingHostFrom2Pods (0.8s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048707 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048707 -- exec busybox-7b57f96db7-l72jq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048707 -- exec busybox-7b57f96db7-l72jq -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048707 -- exec busybox-7b57f96db7-zxjt9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048707 -- exec busybox-7b57f96db7-zxjt9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)
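
The exec'd pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) takes the fifth line of nslookup's output and its third space-separated field, i.e. the resolved address of host.minikube.internal, which the pod then pings. A Go sketch of the same extraction, assuming busybox nslookup's output layout (the sample is illustrative; strings.Fields collapses whitespace runs, which agrees with cut's single-space split on this layout):

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics `awk 'NR==5' | cut -d' ' -f3`.
func hostIPFromNslookup(output string) string {
	lines := strings.Split(output, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Fields(lines[4]) // NR==5: the fifth line
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // -f3: the third field
}

func main() {
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.39.1\n"
	fmt.Println(hostIPFromNslookup(sample)) // 192.168.39.1
}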

TestMultiNode/serial/AddNode (46.8s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-048707 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-048707 -v=5 --alsologtostderr: (46.185891249s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.80s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-048707 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.61s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

TestMultiNode/serial/CopyFile (7.41s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 cp testdata/cp-test.txt multinode-048707:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 cp multinode-048707:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1709303854/001/cp-test_multinode-048707.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 cp multinode-048707:/home/docker/cp-test.txt multinode-048707-m02:/home/docker/cp-test_multinode-048707_multinode-048707-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707-m02 "sudo cat /home/docker/cp-test_multinode-048707_multinode-048707-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 cp multinode-048707:/home/docker/cp-test.txt multinode-048707-m03:/home/docker/cp-test_multinode-048707_multinode-048707-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707-m03 "sudo cat /home/docker/cp-test_multinode-048707_multinode-048707-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 cp testdata/cp-test.txt multinode-048707-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 cp multinode-048707-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1709303854/001/cp-test_multinode-048707-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 cp multinode-048707-m02:/home/docker/cp-test.txt multinode-048707:/home/docker/cp-test_multinode-048707-m02_multinode-048707.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707 "sudo cat /home/docker/cp-test_multinode-048707-m02_multinode-048707.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 cp multinode-048707-m02:/home/docker/cp-test.txt multinode-048707-m03:/home/docker/cp-test_multinode-048707-m02_multinode-048707-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707-m03 "sudo cat /home/docker/cp-test_multinode-048707-m02_multinode-048707-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 cp testdata/cp-test.txt multinode-048707-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 cp multinode-048707-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1709303854/001/cp-test_multinode-048707-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 cp multinode-048707-m03:/home/docker/cp-test.txt multinode-048707:/home/docker/cp-test_multinode-048707-m03_multinode-048707.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707 "sudo cat /home/docker/cp-test_multinode-048707-m03_multinode-048707.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 cp multinode-048707-m03:/home/docker/cp-test.txt multinode-048707-m02:/home/docker/cp-test_multinode-048707-m03_multinode-048707-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 ssh -n multinode-048707-m02 "sudo cat /home/docker/cp-test_multinode-048707-m03_multinode-048707-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.41s)
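
Every copy above is validated the same way: minikube cp the file to a node, then ssh in and sudo cat it back. A hedged sketch of that round trip using the same subcommands that appear in the log; copyAndVerify is an illustrative helper, not part of the test suite:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// copyAndVerify copies a local file to a node and reads it back over ssh,
// mirroring the cp/ssh pairs in the log above.
func copyAndVerify(minikube, profile, node, local, remote string) error {
	want, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	cp := exec.Command(minikube, "-p", profile, "cp", local, node+":"+remote)
	if out, err := cp.CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	cat := exec.Command(minikube, "-p", profile, "ssh", "-n", node, "sudo cat "+remote)
	got, err := cat.Output()
	if err != nil {
		return fmt.Errorf("ssh cat failed: %v", err)
	}
	if !bytes.Equal(want, got) {
		return fmt.Errorf("content mismatch on %s", node)
	}
	return nil
}

func main() {
	err := copyAndVerify("out/minikube-linux-amd64", "multinode-048707",
		"multinode-048707-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Println(err)
}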

TestMultiNode/serial/StopNode (2.51s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-048707 node stop m03: (1.615792681s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-048707 status: exit status 7 (445.289721ms)

-- stdout --
	multinode-048707
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-048707-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-048707-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-048707 status --alsologtostderr: exit status 7 (450.763058ms)

-- stdout --
	multinode-048707
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-048707-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-048707-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1017 20:16:00.407514  110958 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:16:00.407639  110958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:16:00.407649  110958 out.go:374] Setting ErrFile to fd 2...
	I1017 20:16:00.407653  110958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:16:00.407846  110958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-75534/.minikube/bin
	I1017 20:16:00.408013  110958 out.go:368] Setting JSON to false
	I1017 20:16:00.408042  110958 mustload.go:65] Loading cluster: multinode-048707
	I1017 20:16:00.408170  110958 notify.go:220] Checking for updates...
	I1017 20:16:00.408432  110958 config.go:182] Loaded profile config "multinode-048707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:16:00.408449  110958 status.go:174] checking status of multinode-048707 ...
	I1017 20:16:00.409173  110958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:16:00.409217  110958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:16:00.427817  110958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36781
	I1017 20:16:00.428344  110958 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:16:00.428909  110958 main.go:141] libmachine: Using API Version  1
	I1017 20:16:00.428935  110958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:16:00.429369  110958 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:16:00.429604  110958 main.go:141] libmachine: (multinode-048707) Calling .GetState
	I1017 20:16:00.431426  110958 status.go:371] multinode-048707 host status = "Running" (err=<nil>)
	I1017 20:16:00.431446  110958 host.go:66] Checking if "multinode-048707" exists ...
	I1017 20:16:00.431805  110958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:16:00.431863  110958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:16:00.446542  110958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37969
	I1017 20:16:00.447118  110958 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:16:00.447660  110958 main.go:141] libmachine: Using API Version  1
	I1017 20:16:00.447722  110958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:16:00.448101  110958 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:16:00.448340  110958 main.go:141] libmachine: (multinode-048707) Calling .GetIP
	I1017 20:16:00.451533  110958 main.go:141] libmachine: (multinode-048707) DBG | domain multinode-048707 has defined MAC address 52:54:00:2d:6f:94 in network mk-multinode-048707
	I1017 20:16:00.452002  110958 main.go:141] libmachine: (multinode-048707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:6f:94", ip: ""} in network mk-multinode-048707: {Iface:virbr1 ExpiryTime:2025-10-17 21:13:05 +0000 UTC Type:0 Mac:52:54:00:2d:6f:94 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-048707 Clientid:01:52:54:00:2d:6f:94}
	I1017 20:16:00.452040  110958 main.go:141] libmachine: (multinode-048707) DBG | domain multinode-048707 has defined IP address 192.168.39.170 and MAC address 52:54:00:2d:6f:94 in network mk-multinode-048707
	I1017 20:16:00.452245  110958 host.go:66] Checking if "multinode-048707" exists ...
	I1017 20:16:00.452547  110958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:16:00.452613  110958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:16:00.466684  110958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41573
	I1017 20:16:00.467112  110958 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:16:00.467833  110958 main.go:141] libmachine: Using API Version  1
	I1017 20:16:00.467853  110958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:16:00.468226  110958 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:16:00.468467  110958 main.go:141] libmachine: (multinode-048707) Calling .DriverName
	I1017 20:16:00.468667  110958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:16:00.468690  110958 main.go:141] libmachine: (multinode-048707) Calling .GetSSHHostname
	I1017 20:16:00.471641  110958 main.go:141] libmachine: (multinode-048707) DBG | domain multinode-048707 has defined MAC address 52:54:00:2d:6f:94 in network mk-multinode-048707
	I1017 20:16:00.472036  110958 main.go:141] libmachine: (multinode-048707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:6f:94", ip: ""} in network mk-multinode-048707: {Iface:virbr1 ExpiryTime:2025-10-17 21:13:05 +0000 UTC Type:0 Mac:52:54:00:2d:6f:94 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-048707 Clientid:01:52:54:00:2d:6f:94}
	I1017 20:16:00.472078  110958 main.go:141] libmachine: (multinode-048707) DBG | domain multinode-048707 has defined IP address 192.168.39.170 and MAC address 52:54:00:2d:6f:94 in network mk-multinode-048707
	I1017 20:16:00.472249  110958 main.go:141] libmachine: (multinode-048707) Calling .GetSSHPort
	I1017 20:16:00.472416  110958 main.go:141] libmachine: (multinode-048707) Calling .GetSSHKeyPath
	I1017 20:16:00.472573  110958 main.go:141] libmachine: (multinode-048707) Calling .GetSSHUsername
	I1017 20:16:00.472760  110958 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/multinode-048707/id_rsa Username:docker}
	I1017 20:16:00.559544  110958 ssh_runner.go:195] Run: systemctl --version
	I1017 20:16:00.566454  110958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:16:00.584212  110958 kubeconfig.go:125] found "multinode-048707" server: "https://192.168.39.170:8443"
	I1017 20:16:00.584255  110958 api_server.go:166] Checking apiserver status ...
	I1017 20:16:00.584294  110958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:16:00.603722  110958 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1352/cgroup
	W1017 20:16:00.621500  110958 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1352/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:16:00.621570  110958 ssh_runner.go:195] Run: ls
	I1017 20:16:00.626928  110958 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1017 20:16:00.631821  110958 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I1017 20:16:00.631848  110958 status.go:463] multinode-048707 apiserver status = Running (err=<nil>)
	I1017 20:16:00.631858  110958 status.go:176] multinode-048707 status: &{Name:multinode-048707 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:16:00.631875  110958 status.go:174] checking status of multinode-048707-m02 ...
	I1017 20:16:00.632244  110958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:16:00.632284  110958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:16:00.646291  110958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42669
	I1017 20:16:00.646785  110958 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:16:00.647306  110958 main.go:141] libmachine: Using API Version  1
	I1017 20:16:00.647329  110958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:16:00.647735  110958 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:16:00.648080  110958 main.go:141] libmachine: (multinode-048707-m02) Calling .GetState
	I1017 20:16:00.650262  110958 status.go:371] multinode-048707-m02 host status = "Running" (err=<nil>)
	I1017 20:16:00.650281  110958 host.go:66] Checking if "multinode-048707-m02" exists ...
	I1017 20:16:00.650633  110958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:16:00.650677  110958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:16:00.664797  110958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I1017 20:16:00.665287  110958 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:16:00.665760  110958 main.go:141] libmachine: Using API Version  1
	I1017 20:16:00.665786  110958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:16:00.666149  110958 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:16:00.666405  110958 main.go:141] libmachine: (multinode-048707-m02) Calling .GetIP
	I1017 20:16:00.669813  110958 main.go:141] libmachine: (multinode-048707-m02) DBG | domain multinode-048707-m02 has defined MAC address 52:54:00:b5:97:32 in network mk-multinode-048707
	I1017 20:16:00.670165  110958 main.go:141] libmachine: (multinode-048707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:97:32", ip: ""} in network mk-multinode-048707: {Iface:virbr1 ExpiryTime:2025-10-17 21:14:30 +0000 UTC Type:0 Mac:52:54:00:b5:97:32 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-048707-m02 Clientid:01:52:54:00:b5:97:32}
	I1017 20:16:00.670208  110958 main.go:141] libmachine: (multinode-048707-m02) DBG | domain multinode-048707-m02 has defined IP address 192.168.39.208 and MAC address 52:54:00:b5:97:32 in network mk-multinode-048707
	I1017 20:16:00.670395  110958 host.go:66] Checking if "multinode-048707-m02" exists ...
	I1017 20:16:00.670806  110958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:16:00.670857  110958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:16:00.684943  110958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35721
	I1017 20:16:00.685481  110958 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:16:00.686060  110958 main.go:141] libmachine: Using API Version  1
	I1017 20:16:00.686087  110958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:16:00.686486  110958 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:16:00.686688  110958 main.go:141] libmachine: (multinode-048707-m02) Calling .DriverName
	I1017 20:16:00.686895  110958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:16:00.686919  110958 main.go:141] libmachine: (multinode-048707-m02) Calling .GetSSHHostname
	I1017 20:16:00.689890  110958 main.go:141] libmachine: (multinode-048707-m02) DBG | domain multinode-048707-m02 has defined MAC address 52:54:00:b5:97:32 in network mk-multinode-048707
	I1017 20:16:00.690435  110958 main.go:141] libmachine: (multinode-048707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:97:32", ip: ""} in network mk-multinode-048707: {Iface:virbr1 ExpiryTime:2025-10-17 21:14:30 +0000 UTC Type:0 Mac:52:54:00:b5:97:32 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-048707-m02 Clientid:01:52:54:00:b5:97:32}
	I1017 20:16:00.690467  110958 main.go:141] libmachine: (multinode-048707-m02) DBG | domain multinode-048707-m02 has defined IP address 192.168.39.208 and MAC address 52:54:00:b5:97:32 in network mk-multinode-048707
	I1017 20:16:00.690823  110958 main.go:141] libmachine: (multinode-048707-m02) Calling .GetSSHPort
	I1017 20:16:00.691005  110958 main.go:141] libmachine: (multinode-048707-m02) Calling .GetSSHKeyPath
	I1017 20:16:00.691165  110958 main.go:141] libmachine: (multinode-048707-m02) Calling .GetSSHUsername
	I1017 20:16:00.691344  110958 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-75534/.minikube/machines/multinode-048707-m02/id_rsa Username:docker}
	I1017 20:16:00.772425  110958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:16:00.790670  110958 status.go:176] multinode-048707-m02 status: &{Name:multinode-048707-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:16:00.790712  110958 status.go:174] checking status of multinode-048707-m03 ...
	I1017 20:16:00.791289  110958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:16:00.791351  110958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:16:00.805410  110958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37615
	I1017 20:16:00.805885  110958 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:16:00.806304  110958 main.go:141] libmachine: Using API Version  1
	I1017 20:16:00.806329  110958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:16:00.806718  110958 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:16:00.806899  110958 main.go:141] libmachine: (multinode-048707-m03) Calling .GetState
	I1017 20:16:00.808794  110958 status.go:371] multinode-048707-m03 host status = "Stopped" (err=<nil>)
	I1017 20:16:00.808807  110958 status.go:384] host is not running, skipping remaining checks
	I1017 20:16:00.808813  110958 status.go:176] multinode-048707-m03 status: &{Name:multinode-048707-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.51s)
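
The stderr trace shows the status flow for a running control-plane node: query the VM state through the libmachine driver plugin, check kubelet with systemctl is-active over ssh, then probe the apiserver at https://<ip>:8443/healthz and expect a 200 with body "ok". A minimal sketch of just the healthz probe; TLS verification is skipped here purely for illustration, whereas the real code path uses the cluster's certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// apiserverHealthy mirrors the "Checking apiserver healthz" step above.
func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: skips certificate checks.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.170:8443")
	fmt.Println(ok, err) // the run above saw a 200 with body "ok"
}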

TestMultiNode/serial/StartAfterStop (37.66s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-048707 node start m03 -v=5 --alsologtostderr: (37.008802373s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.66s)

TestMultiNode/serial/RestartKeepsNodes (328.5s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-048707
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-048707
E1017 20:18:47.751017   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-048707: (2m53.052891823s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-048707 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-048707 --wait=true -v=5 --alsologtostderr: (2m35.340720943s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-048707
--- PASS: TestMultiNode/serial/RestartKeepsNodes (328.50s)

TestMultiNode/serial/DeleteNode (2.8s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-048707 node delete m03: (2.228291553s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.80s)
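
The readiness check above feeds kubectl a go-template that walks each node's conditions and prints the status of the "Ready" one. The same template can be evaluated locally with Go's text/template over map data (kubectl applies it to the unstructured node list, hence the lowercase keys), which shows what the test expects after the delete: one "True" line per remaining node.

package main

import (
	"os"
	"text/template"
)

// The template string from the kubectl invocation above, minus shell quoting.
const ready = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	node := map[string]any{
		"status": map[string]any{
			"conditions": []map[string]any{
				{"type": "NetworkUnavailable", "status": "False"},
				{"type": "Ready", "status": "True"},
			},
		},
	}
	// Two nodes remain after `node delete m03`.
	data := map[string]any{"items": []any{node, node}}
	// Prints " True" twice, one line per Ready node.
	template.Must(template.New("ready").Parse(ready)).Execute(os.Stdout, data)
}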

TestMultiNode/serial/StopMultiNode (159.32s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 stop
E1017 20:23:47.751034   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-048707 stop: (2m39.143514346s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-048707 status: exit status 7 (94.768474ms)

-- stdout --
	multinode-048707
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-048707-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-048707 status --alsologtostderr: exit status 7 (82.476364ms)

-- stdout --
	multinode-048707
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-048707-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1017 20:24:49.048014  114171 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:24:49.048269  114171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:24:49.048277  114171 out.go:374] Setting ErrFile to fd 2...
	I1017 20:24:49.048281  114171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:24:49.048437  114171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-75534/.minikube/bin
	I1017 20:24:49.048613  114171 out.go:368] Setting JSON to false
	I1017 20:24:49.048640  114171 mustload.go:65] Loading cluster: multinode-048707
	I1017 20:24:49.048745  114171 notify.go:220] Checking for updates...
	I1017 20:24:49.048996  114171 config.go:182] Loaded profile config "multinode-048707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:49.049008  114171 status.go:174] checking status of multinode-048707 ...
	I1017 20:24:49.049394  114171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:24:49.049433  114171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:24:49.063113  114171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35021
	I1017 20:24:49.063599  114171 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:24:49.064162  114171 main.go:141] libmachine: Using API Version  1
	I1017 20:24:49.064177  114171 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:24:49.064527  114171 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:24:49.064706  114171 main.go:141] libmachine: (multinode-048707) Calling .GetState
	I1017 20:24:49.066502  114171 status.go:371] multinode-048707 host status = "Stopped" (err=<nil>)
	I1017 20:24:49.066517  114171 status.go:384] host is not running, skipping remaining checks
	I1017 20:24:49.066522  114171 status.go:176] multinode-048707 status: &{Name:multinode-048707 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:24:49.066566  114171 status.go:174] checking status of multinode-048707-m02 ...
	I1017 20:24:49.066866  114171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:24:49.066904  114171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:24:49.080782  114171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33261
	I1017 20:24:49.081169  114171 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:24:49.081608  114171 main.go:141] libmachine: Using API Version  1
	I1017 20:24:49.081629  114171 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:24:49.081944  114171 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:24:49.082138  114171 main.go:141] libmachine: (multinode-048707-m02) Calling .GetState
	I1017 20:24:49.083965  114171 status.go:371] multinode-048707-m02 host status = "Stopped" (err=<nil>)
	I1017 20:24:49.083981  114171 status.go:384] host is not running, skipping remaining checks
	I1017 20:24:49.084001  114171 status.go:176] multinode-048707-m02 status: &{Name:multinode-048707-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (159.32s)

TestMultiNode/serial/RestartMultiNode (87.64s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-048707 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-048707 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m27.005803741s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048707 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (87.64s)

TestMultiNode/serial/ValidateNameConflict (42.19s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-048707
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-048707-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-048707-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (68.767028ms)

-- stdout --
	* [multinode-048707-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-048707-m02' is duplicated with machine name 'multinode-048707-m02' in profile 'multinode-048707'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-048707-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 20:26:50.826815   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-048707-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.989759163s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-048707
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-048707: exit status 80 (226.377061ms)

-- stdout --
	* Adding node m03 to cluster multinode-048707 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-048707-m03 already exists in multinode-048707-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-048707-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.19s)
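
What this validates, condensed into a shell sketch (profile names and exit code taken from this run): minikube refuses to create a profile whose name collides with a machine name inside an existing multinode profile, and fails fast with MK_USAGE before touching the VM layer.

    minikube node list -p multinode-048707            # multinode-048707-m02 already exists as a machine here
    minikube start -p multinode-048707-m02 --driver=kvm2 --container-runtime=crio
    echo $?                                           # 14 (MK_USAGE: profile name should be unique)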

TestScheduledStopUnix (114.06s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-651800 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-651800 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.25529563s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-651800 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-651800 -n scheduled-stop-651800
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-651800 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1017 20:29:52.349907   79439 retry.go:31] will retry after 78.56µs: open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/scheduled-stop-651800/pid: no such file or directory
I1017 20:29:52.351127   79439 retry.go:31] will retry after 169.452µs: open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/scheduled-stop-651800/pid: no such file or directory
I1017 20:29:52.352313   79439 retry.go:31] will retry after 270.544µs: open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/scheduled-stop-651800/pid: no such file or directory
I1017 20:29:52.353494   79439 retry.go:31] will retry after 385.408µs: open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/scheduled-stop-651800/pid: no such file or directory
I1017 20:29:52.354626   79439 retry.go:31] will retry after 711.133µs: open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/scheduled-stop-651800/pid: no such file or directory
I1017 20:29:52.355791   79439 retry.go:31] will retry after 613.778µs: open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/scheduled-stop-651800/pid: no such file or directory
I1017 20:29:52.356981   79439 retry.go:31] will retry after 654.129µs: open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/scheduled-stop-651800/pid: no such file or directory
I1017 20:29:52.358124   79439 retry.go:31] will retry after 2.452684ms: open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/scheduled-stop-651800/pid: no such file or directory
I1017 20:29:52.361366   79439 retry.go:31] will retry after 3.470099ms: open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/scheduled-stop-651800/pid: no such file or directory
I1017 20:29:52.365618   79439 retry.go:31] will retry after 4.99314ms: open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/scheduled-stop-651800/pid: no such file or directory
I1017 20:29:52.370894   79439 retry.go:31] will retry after 5.231745ms: open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/scheduled-stop-651800/pid: no such file or directory
I1017 20:29:52.377133   79439 retry.go:31] will retry after 4.705811ms: open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/scheduled-stop-651800/pid: no such file or directory
I1017 20:29:52.382379   79439 retry.go:31] will retry after 18.829693ms: open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/scheduled-stop-651800/pid: no such file or directory
I1017 20:29:52.401662   79439 retry.go:31] will retry after 15.919892ms: open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/scheduled-stop-651800/pid: no such file or directory
I1017 20:29:52.417943   79439 retry.go:31] will retry after 36.628823ms: open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/scheduled-stop-651800/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-651800 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-651800 -n scheduled-stop-651800
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-651800
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-651800 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-651800
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-651800: exit status 7 (77.765248ms)

-- stdout --
	scheduled-stop-651800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-651800 -n scheduled-stop-651800
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-651800 -n scheduled-stop-651800: exit status 7 (68.56641ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-651800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-651800
--- PASS: TestScheduledStopUnix (114.06s)
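
The scheduled-stop workflow exercised above, reduced to the essential commands (all flags appear in this run's log):

    minikube stop -p scheduled-stop-651800 --schedule 5m          # arm a stop five minutes out
    minikube status --format={{.TimeToStop}} -p scheduled-stop-651800
    minikube stop -p scheduled-stop-651800 --cancel-scheduled     # disarm before it fires
    minikube stop -p scheduled-stop-651800 --schedule 15s         # re-arm; once it fires, status exits 7 with host Stopped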

TestRunningBinaryUpgrade (165.15s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3265359571 start -p running-upgrade-069715 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3265359571 start -p running-upgrade-069715 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m46.446088509s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-069715 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-069715 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (56.728628414s)
helpers_test.go:175: Cleaning up "running-upgrade-069715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-069715
E1017 20:33:47.741541   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-069715: (1.210424964s)
--- PASS: TestRunningBinaryUpgrade (165.15s)
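
Condensed, the running-binary upgrade path is: bring the cluster up with an older released binary, then rerun start with the binary under test against the same, still-running profile (commands as executed above; the /tmp binary name carries a per-run random suffix):

    /tmp/minikube-v1.32.0.3265359571 start -p running-upgrade-069715 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-069715 --memory=3072 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p running-upgrade-069715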

TestKubernetesUpgrade (269.72s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-081973 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-081973 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m4.50214102s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-081973
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-081973: (2.161975921s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-081973 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-081973 status --format={{.Host}}: exit status 7 (94.198015ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-081973 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-081973 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m13.313621906s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-081973 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-081973 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-081973 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (100.800567ms)

-- stdout --
	* [kubernetes-upgrade-081973] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-081973
	    minikube start -p kubernetes-upgrade-081973 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0819732 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-081973 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-081973 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-081973 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m8.427620369s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-081973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-081973
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-081973: (1.05010783s)
--- PASS: TestKubernetesUpgrade (269.72s)
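
The version contract verified above, as a sketch (versions and exit code taken from this run): upgrades apply in place after a stop, while downgrades are refused outright with a suggestion to delete and recreate.

    minikube start -p kubernetes-upgrade-081973 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-081973
    minikube start -p kubernetes-upgrade-081973 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio   # upgrade: OK
    minikube start -p kubernetes-upgrade-081973 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio   # downgrade: exit 106 (K8S_DOWNGRADE_UNSUPPORTED)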

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-064947 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-064947 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (79.528411ms)

-- stdout --
	* [NoKubernetes-064947] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
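
This guard is pure flag validation, which is why the whole test finishes in well under a second. To reproduce, and to clear a globally configured version as the error text suggests:

    minikube start -p NoKubernetes-064947 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2   # exit 14 (MK_USAGE)
    minikube config unset kubernetes-version    # drop a global default so --no-kubernetes can be used alone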

TestNoKubernetes/serial/StartWithK8s (85.33s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-064947 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-064947 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.040329049s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-064947 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (85.33s)

TestPause/serial/Start (105.1s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-037697 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-037697 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m45.09864911s)
--- PASS: TestPause/serial/Start (105.10s)

TestNoKubernetes/serial/StartWithStopK8s (48.99s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-064947 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-064947 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (47.786045555s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-064947 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-064947 status -o json: exit status 2 (288.398996ms)

-- stdout --
	{"Name":"NoKubernetes-064947","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-064947
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (48.99s)
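
The same state can be asserted by hand from that JSON; jq here is an assumption, not something the test uses:

    minikube -p NoKubernetes-064947 status -o json | jq -r .Kubelet    # prints "Stopped"
    # minikube status itself exits 2 in this state, which the pipe masks unless `set -o pipefail` is active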

TestNoKubernetes/serial/Start (23.9s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-064947 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-064947 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (23.89791962s)
--- PASS: TestNoKubernetes/serial/Start (23.90s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-064947 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-064947 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.22991ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
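
The pass condition rides on systemd's exit code: systemctl is-active exits 0 only for an active unit, so a non-zero exit over SSH is exactly what --no-kubernetes promises.

    minikube ssh -p NoKubernetes-064947 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero while kubelet is down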

TestNoKubernetes/serial/ProfileList (4.44s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (3.309784446s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.127735745s)
--- PASS: TestNoKubernetes/serial/ProfileList (4.44s)

TestNoKubernetes/serial/Stop (1.49s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-064947
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-064947: (1.486702348s)
--- PASS: TestNoKubernetes/serial/Stop (1.49s)

TestNoKubernetes/serial/StartNoArgs (20.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-064947 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-064947 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.342009668s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (20.34s)

TestStoppedBinaryUpgrade/Setup (0.73s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.73s)

TestStoppedBinaryUpgrade/Upgrade (102.53s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2868535503 start -p stopped-upgrade-663746 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2868535503 start -p stopped-upgrade-663746 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.951337949s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2868535503 -p stopped-upgrade-663746 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2868535503 -p stopped-upgrade-663746 stop: (1.646822596s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-663746 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-663746 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.932862454s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (102.53s)
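
This variant differs from TestRunningBinaryUpgrade in one step: the old cluster is stopped before the new binary takes over (commands as executed above; the /tmp binary name carries a per-run random suffix):

    /tmp/minikube-v1.32.0.2868535503 start -p stopped-upgrade-663746 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.32.0.2868535503 -p stopped-upgrade-663746 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-663746 --memory=3072 --driver=kvm2 --container-runtime=crio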

TestPause/serial/SecondStartNoReconfiguration (69.24s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-037697 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-037697 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m9.205313919s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (69.24s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-064947 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-064947 "sudo systemctl is-active --quiet service kubelet": exit status 1 (217.935399ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestPause/serial/Pause (1.32s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-037697 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-037697 --alsologtostderr -v=5: (1.324741372s)
--- PASS: TestPause/serial/Pause (1.32s)

TestPause/serial/VerifyStatus (0.29s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-037697 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-037697 --output=json --layout=cluster: exit status 2 (289.886267ms)

-- stdout --
	{"Name":"pause-037697","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-037697","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
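
Note the HTTP-flavored status codes in the cluster layout (200 OK, 405 Stopped, 418 Paused). Assuming jq is available (an assumption, not part of the test), the paused state can be checked directly; exit status 2 from the status command itself is the expected result for a paused cluster:

    minikube status -p pause-037697 --output=json --layout=cluster | jq -r .StatusName   # Paused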

TestPause/serial/Unpause (0.9s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-037697 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.90s)

TestPause/serial/PauseAgain (0.99s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-037697 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.99s)

TestPause/serial/DeletePaused (0.88s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-037697 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.88s)

TestPause/serial/VerifyDeletedResources (2.61s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.607415623s)
--- PASS: TestPause/serial/VerifyDeletedResources (2.61s)

TestNetworkPlugins/group/false (3.87s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-385347 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-385347 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (120.669024ms)

-- stdout --
	* [false-385347] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1017 20:35:11.164375  122400 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:35:11.164663  122400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:35:11.164675  122400 out.go:374] Setting ErrFile to fd 2...
	I1017 20:35:11.164680  122400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:35:11.164888  122400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-75534/.minikube/bin
	I1017 20:35:11.165364  122400 out.go:368] Setting JSON to false
	I1017 20:35:11.166235  122400 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":11862,"bootTime":1760721449,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:35:11.166334  122400 start.go:141] virtualization: kvm guest
	I1017 20:35:11.168793  122400 out.go:179] * [false-385347] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:35:11.173086  122400 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:35:11.173085  122400 notify.go:220] Checking for updates...
	I1017 20:35:11.174627  122400 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:35:11.176157  122400 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-75534/kubeconfig
	I1017 20:35:11.177277  122400 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-75534/.minikube
	I1017 20:35:11.178311  122400 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:35:11.179269  122400 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:35:11.180784  122400 config.go:182] Loaded profile config "force-systemd-flag-449009": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:35:11.180893  122400 config.go:182] Loaded profile config "kubernetes-upgrade-081973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:35:11.181008  122400 config.go:182] Loaded profile config "stopped-upgrade-663746": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1017 20:35:11.181110  122400 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:35:11.226708  122400 out.go:179] * Using the kvm2 driver based on user configuration
	I1017 20:35:11.227881  122400 start.go:305] selected driver: kvm2
	I1017 20:35:11.227895  122400 start.go:925] validating driver "kvm2" against <nil>
	I1017 20:35:11.227906  122400 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:35:11.229721  122400 out.go:203] 
	W1017 20:35:11.230753  122400 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1017 20:35:11.231917  122400 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-385347 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-385347

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-385347

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-385347

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-385347

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-385347

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-385347

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-385347

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-385347

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-385347

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-385347

>>> host: /etc/nsswitch.conf:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: /etc/hosts:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: /etc/resolv.conf:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-385347

>>> host: crictl pods:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: crictl containers:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> k8s: describe netcat deployment:
error: context "false-385347" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-385347" does not exist

>>> k8s: netcat logs:
error: context "false-385347" does not exist

>>> k8s: describe coredns deployment:
error: context "false-385347" does not exist

>>> k8s: describe coredns pods:
error: context "false-385347" does not exist

>>> k8s: coredns logs:
error: context "false-385347" does not exist

>>> k8s: describe api server pod(s):
error: context "false-385347" does not exist

>>> k8s: api server logs:
error: context "false-385347" does not exist

>>> host: /etc/cni:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: ip a s:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: ip r s:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: iptables-save:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: iptables table nat:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> k8s: describe kube-proxy daemon set:
error: context "false-385347" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-385347" does not exist

>>> k8s: kube-proxy logs:
error: context "false-385347" does not exist

>>> host: kubelet daemon status:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: kubelet daemon config:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> k8s: kubelet logs:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:33:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.232:8443
  name: kubernetes-upgrade-081973
contexts:
- context:
    cluster: kubernetes-upgrade-081973
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:33:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-081973
  name: kubernetes-upgrade-081973
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-081973
  user:
    client-certificate: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/kubernetes-upgrade-081973/client.crt
    client-key: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/kubernetes-upgrade-081973/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-385347

>>> host: docker daemon status:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: docker daemon config:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: /etc/docker/daemon.json:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: docker system info:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: cri-docker daemon status:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: cri-docker daemon config:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: cri-dockerd version:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: containerd daemon status:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: containerd daemon config:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: /etc/containerd/config.toml:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: containerd config dump:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: crio daemon status:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: crio daemon config:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: /etc/crio:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

>>> host: crio config:
* Profile "false-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-385347"

----------------------- debugLogs end: false-385347 [took: 3.569427561s] --------------------------------
helpers_test.go:175: Cleaning up "false-385347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-385347
--- PASS: TestNetworkPlugins/group/false (3.87s)
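
The non-zero exit is the point of this test: cri-o has no built-in pod networking, so minikube rejects --cni=false during flag validation and never creates a VM, which is also why every debugLogs probe above reports a missing profile.

    minikube start -p false-385347 --cni=false --container-runtime=crio --driver=kvm2
    # exit 14: MK_USAGE: The "crio" container runtime requires CNI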

TestStoppedBinaryUpgrade/MinikubeLogs (1.27s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-663746
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-663746: (1.267369222s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.27s)

TestStartStop/group/old-k8s-version/serial/FirstStart (132.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-358634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-358634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (2m12.701688294s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (132.70s)

TestStartStop/group/no-preload/serial/FirstStart (142.5s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-983636 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-983636 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (2m22.50278699s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (142.50s)

TestStartStop/group/embed-certs/serial/FirstStart (97.35s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-967470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-967470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m37.34787719s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (97.35s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.35s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-358634 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [000d8dd3-6b71-4dcc-90c7-7e3b0aa6372a] Pending
helpers_test.go:352: "busybox" [000d8dd3-6b71-4dcc-90c7-7e3b0aa6372a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [000d8dd3-6b71-4dcc-90c7-7e3b0aa6372a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004264331s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-358634 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.35s)

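For manual reproduction, the deploy-and-verify flow above condenses to the commands below. This is a minimal sketch assuming the same profile name and testdata manifest as this run; "kubectl wait" stands in for the harness's pod-polling loop and is not what the harness itself calls.

	# create the busybox pod from the repo's test manifest
	kubectl --context old-k8s-version-358634 create -f testdata/busybox.yaml
	# block until the pod is Ready (the harness polls with an 8m budget)
	kubectl --context old-k8s-version-358634 wait pod --selector=integration-test=busybox --for=condition=ready --timeout=8m
	# spot-check the running container with a trivial exec
	kubectl --context old-k8s-version-358634 exec busybox -- /bin/sh -c "ulimit -n"
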
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-358634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-358634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.090049581s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-358634 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/old-k8s-version/serial/Stop (88.89s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-358634 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-358634 --alsologtostderr -v=3: (1m28.886045224s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (88.89s)

TestStartStop/group/embed-certs/serial/DeployApp (10.28s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-967470 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e31feb80-d06d-4737-bbee-2efb04f9d4ea] Pending
helpers_test.go:352: "busybox" [e31feb80-d06d-4737-bbee-2efb04f9d4ea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e31feb80-d06d-4737-bbee-2efb04f9d4ea] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003593029s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-967470 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-967470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-967470 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/embed-certs/serial/Stop (87.28s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-967470 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-967470 --alsologtostderr -v=3: (1m27.280426446s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (87.28s)

TestStartStop/group/no-preload/serial/DeployApp (9.29s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-983636 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [03c93288-2afd-44a6-9c94-c5d8a33a6fb2] Pending
helpers_test.go:352: "busybox" [03c93288-2afd-44a6-9c94-c5d8a33a6fb2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [03c93288-2afd-44a6-9c94-c5d8a33a6fb2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00508885s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-983636 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-983636 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-983636 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/no-preload/serial/Stop (84.66s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-983636 --alsologtostderr -v=3
E1017 20:38:47.742481   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-983636 --alsologtostderr -v=3: (1m24.66161488s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (84.66s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-358634 -n old-k8s-version-358634
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-358634 -n old-k8s-version-358634: exit status 7 (77.461429ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-358634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

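Condensed for reference, this check asserts that addon enablement works against a stopped cluster. A minimal sketch using the exact commands recorded above; exit status 7 with "Stopped" from the status probe is the expected precondition, not a failure.

	# expected: exit status 7, printing "Stopped", while the VM is down
	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-358634 -n old-k8s-version-358634
	# enabling an addon must still succeed on the stopped profile
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-358634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
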
TestStartStop/group/old-k8s-version/serial/SecondStart (48.77s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-358634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-358634 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (48.351998571s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-358634 -n old-k8s-version-358634
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.77s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-967470 -n embed-certs-967470
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-967470 -n embed-certs-967470: exit status 7 (77.450656ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-967470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (49.78s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-967470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-967470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (49.325197804s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-967470 -n embed-certs-967470
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.78s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-983636 -n no-preload-983636
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-983636 -n no-preload-983636: exit status 7 (104.057547ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-983636 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/SecondStart (74.7s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-983636 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-983636 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m14.160190106s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-983636 -n no-preload-983636
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (74.70s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8g7fk" [b036c9a8-0966-4acf-b194-0eb6d45ef239] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8g7fk" [b036c9a8-0966-4acf-b194-0eb6d45ef239] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.005243379s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8g7fk" [b036c9a8-0966-4acf-b194-0eb6d45ef239] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004789992s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-358634 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-358634 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

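To inspect the same image set by hand, the listing command above can be piped through jq. The jq filter and the "repoTags" field name are assumptions about the JSON schema of current minikube releases, not part of the recorded run.

	# list images in the profile's container runtime and print their tags
	# (jq and the repoTags field are assumptions, not recorded in this run)
	out/minikube-linux-amd64 -p old-k8s-version-358634 image list --format=json | jq -r '.[].repoTags[]'
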
TestStartStop/group/old-k8s-version/serial/Pause (3.64s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-358634 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-358634 --alsologtostderr -v=1: (1.144496645s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-358634 -n old-k8s-version-358634
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-358634 -n old-k8s-version-358634: exit status 2 (305.656226ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-358634 -n old-k8s-version-358634
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-358634 -n old-k8s-version-358634: exit status 2 (315.086535ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-358634 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-358634 --alsologtostderr -v=1: (1.033543283s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-358634 -n old-k8s-version-358634
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-358634 -n old-k8s-version-358634
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.64s)

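The pause cycle above reduces to the sequence below; the exit-status-2 probes ("Paused" for the API server, "Stopped" for the kubelet) are the expected mid-cycle states rather than errors. A sketch assembled from the commands recorded in this run.

	out/minikube-linux-amd64 pause -p old-k8s-version-358634 --alsologtostderr -v=1
	# while paused: APIServer prints "Paused", Kubelet prints "Stopped" (both exit 2)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-358634 -n old-k8s-version-358634
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-358634 -n old-k8s-version-358634
	out/minikube-linux-amd64 unpause -p old-k8s-version-358634 --alsologtostderr -v=1
	# after unpause both probes exit 0 again
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-358634 -n old-k8s-version-358634
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-358634 -n old-k8s-version-358634
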
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mwzf4" [79c48f56-fd9b-4b89-9239-206947f3139f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mwzf4" [79c48f56-fd9b-4b89-9239-206947f3139f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.006705997s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-438402 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-438402 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m3.107019426s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.11s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.24s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mwzf4" [79c48f56-fd9b-4b89-9239-206947f3139f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005465411s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-967470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.24s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-967470 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/embed-certs/serial/Pause (4.65s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-967470 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-967470 --alsologtostderr -v=1: (1.99690349s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-967470 -n embed-certs-967470
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-967470 -n embed-certs-967470: exit status 2 (352.036775ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-967470 -n embed-certs-967470
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-967470 -n embed-certs-967470: exit status 2 (347.973269ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-967470 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-967470 --alsologtostderr -v=1: (1.150657236s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-967470 -n embed-certs-967470
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-967470 -n embed-certs-967470
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.65s)

TestStartStop/group/newest-cni/serial/FirstStart (57.89s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-890722 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-890722 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (57.88776085s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (57.89s)

TestNetworkPlugins/group/auto/Start (112.5s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-385347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-385347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m52.496726654s)
--- PASS: TestNetworkPlugins/group/auto/Start (112.50s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4wvj2" [036dfb1c-8348-452a-a8c4-6a5ff3dba6ca] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004950844s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4wvj2" [036dfb1c-8348-452a-a8c4-6a5ff3dba6ca] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004954189s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-983636 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-983636 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/no-preload/serial/Pause (4.25s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-983636 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-983636 --alsologtostderr -v=1: (1.487720764s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-983636 -n no-preload-983636
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-983636 -n no-preload-983636: exit status 2 (294.241659ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-983636 -n no-preload-983636
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-983636 -n no-preload-983636: exit status 2 (309.666483ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-983636 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-983636 --alsologtostderr -v=1: (1.425195079s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-983636 -n no-preload-983636
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-983636 -n no-preload-983636
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.25s)

TestNetworkPlugins/group/calico/Start (85.33s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-385347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-385347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.327828671s)
--- PASS: TestNetworkPlugins/group/calico/Start (85.33s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-438402 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e383c515-7191-4ae4-a810-ae55ac742faa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e383c515-7191-4ae4-a810-ae55ac742faa] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005034116s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-438402 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.53s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-438402 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-438402 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.431421507s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-438402 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.53s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (82.83s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-438402 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-438402 --alsologtostderr -v=3: (1m22.829719405s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (82.83s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.03s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-890722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-890722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.034690689s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.03s)

TestStartStop/group/newest-cni/serial/Stop (10.97s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-890722 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-890722 --alsologtostderr -v=3: (10.96593186s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.97s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-890722 -n newest-cni-890722
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-890722 -n newest-cni-890722: exit status 7 (81.534569ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-890722 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (37.65s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-890722 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-890722 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (37.295975816s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-890722 -n newest-cni-890722
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.65s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-890722 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/newest-cni/serial/Pause (3.91s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-890722 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-890722 --alsologtostderr -v=1: (1.089693952s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-890722 -n newest-cni-890722
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-890722 -n newest-cni-890722: exit status 2 (358.083261ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-890722 -n newest-cni-890722
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-890722 -n newest-cni-890722: exit status 2 (378.314518ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-890722 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-890722 --alsologtostderr -v=1: (1.217140799s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-890722 -n newest-cni-890722
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-890722 -n newest-cni-890722
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.91s)

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-385347 "pgrep -a kubelet"
I1017 20:42:46.829759   79439 config.go:182] Loaded profile config "auto-385347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (11.32s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-385347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kz54j" [c291a724-6dc8-4a5a-961c-eab332e53cb5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1017 20:42:47.269215   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/old-k8s-version-358634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:42:47.275686   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/old-k8s-version-358634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:42:47.287186   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/old-k8s-version-358634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:42:47.308747   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/old-k8s-version-358634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:42:47.350297   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/old-k8s-version-358634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:42:47.431827   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/old-k8s-version-358634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:42:47.593458   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/old-k8s-version-358634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:42:47.915252   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/old-k8s-version-358634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-kz54j" [c291a724-6dc8-4a5a-961c-eab332e53cb5] Running
E1017 20:42:52.399645   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/old-k8s-version-358634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:42:57.521020   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/old-k8s-version-358634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004082989s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.32s)

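For reference, the netcat rollout above can be reproduced with the manifest from this run. A minimal sketch; "kubectl rollout status" stands in for the harness's pod polling and is an assumption, not what the harness calls.

	kubectl --context auto-385347 replace --force -f testdata/netcat-deployment.yaml
	# wait for the deployment to become available (the harness budget is 15m)
	kubectl --context auto-385347 rollout status deployment/netcat --timeout=15m
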
TestNetworkPlugins/group/custom-flannel/Start (73.81s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-385347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 20:42:48.556661   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/old-k8s-version-358634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:42:49.838104   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/old-k8s-version-358634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-385347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m13.805395765s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.81s)

TestNetworkPlugins/group/auto/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-385347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

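Taken together, the three probes above are the plugin's basic connectivity checks: in-cluster DNS resolution, pod-local loopback, and hairpin traffic back through the pod's own service. Collected into one runnable sequence from the commands recorded in this run.

	# DNS: resolve the kubernetes.default service from inside the pod
	kubectl --context auto-385347 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: the pod reaches its own port over loopback
	kubectl --context auto-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: the pod reaches itself through its service name
	kubectl --context auto-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
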
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-kpmsc" [f2b17874-364a-4beb-90f3-2ae30105e573] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-kpmsc" [f2b17874-364a-4beb-90f3-2ae30105e573] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.009728121s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-385347 "pgrep -a kubelet"
I1017 20:43:06.638307   79439 config.go:182] Loaded profile config "calico-385347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

TestNetworkPlugins/group/calico/NetCatPod (11.39s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-385347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q4bwn" [17e9f669-e6ec-4ca4-8495-ad0bc58d309e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1017 20:43:07.763161   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/old-k8s-version-358634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-q4bwn" [17e9f669-e6ec-4ca4-8495-ad0bc58d309e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.142699517s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-438402 -n default-k8s-diff-port-438402
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-438402 -n default-k8s-diff-port-438402: exit status 7 (82.256475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-438402 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)
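The exit status 7 above is expected right after a stop: per the bit-encoded exit status described in `minikube status --help`, 7 sets the host, cluster and Kubernetes bits (1+2+4) when everything is down, which is why the test logs "may be ok" and proceeds to enable the addon. A quick manual check of the same encoding (reusing this test's profile name):

    out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-438402 -n default-k8s-diff-port-438402
    echo $?   # 7 = 1 (host) + 2 (cluster) + 4 (kubernetes), i.e. fully stopped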

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-438402 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-438402 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (45.516017576s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-438402 -n default-k8s-diff-port-438402
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.90s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (84.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-385347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-385347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m24.773268961s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.77s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-385347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)
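Each DNS subtest is a single in-pod lookup of the short name kubernetes.default, which only resolves if the pod's /etc/resolv.conf search path and the cluster DNS service are both working. The equivalent manual probes (context name taken from this run):

    kubectl --context calico-385347 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context calico-385347 exec deployment/netcat -- cat /etc/resolv.conf   # shows the search domains the short name relies on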

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (91.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-385347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 20:43:38.626997   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/no-preload-983636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:43:47.741724   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/addons-768633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:43:48.869265   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/no-preload-983636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-385347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m31.950294075s)
--- PASS: TestNetworkPlugins/group/flannel/Start (91.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-92bpd" [0db35b5b-d43b-4d9c-9225-db342a26a5ae] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-92bpd" [0db35b5b-d43b-4d9c-9225-db342a26a5ae] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.005967387s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-385347 "pgrep -a kubelet"
I1017 20:44:02.459401   79439 config.go:182] Loaded profile config "custom-flannel-385347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-385347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hv4bm" [f81deb9e-d53d-4fd4-8a40-f9874c3be920] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hv4bm" [f81deb9e-d53d-4fd4-8a40-f9874c3be920] Running
E1017 20:44:09.206603   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/old-k8s-version-358634/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004263859s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-92bpd" [0db35b5b-d43b-4d9c-9225-db342a26a5ae] Running
E1017 20:44:09.351096   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/no-preload-983636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003943542s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-438402 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-438402 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.62s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-438402 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-438402 --alsologtostderr -v=1: (1.139318103s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-438402 -n default-k8s-diff-port-438402
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-438402 -n default-k8s-diff-port-438402: exit status 2 (273.610028ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-438402 -n default-k8s-diff-port-438402
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-438402 -n default-k8s-diff-port-438402: exit status 2 (297.127115ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-438402 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-438402 --alsologtostderr -v=1: (1.043635979s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-438402 -n default-k8s-diff-port-438402
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-438402 -n default-k8s-diff-port-438402
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.62s)
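Note the status semantics in this Pause sequence: while paused, the per-component queries print Paused/Stopped and exit with status 2, which the test explicitly tolerates ("may be ok"); after unpause the same queries succeed. A condensed sketch of the round-trip (reusing this profile name):

    out/minikube-linux-amd64 pause -p default-k8s-diff-port-438402 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-438402   # prints Paused, exits 2
    out/minikube-linux-amd64 unpause -p default-k8s-diff-port-438402
    out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-438402   # exits 0 once resumed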

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-385347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (90.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-385347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-385347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m30.891859761s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.89s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (93.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-385347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-385347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m33.425895176s)
--- PASS: TestNetworkPlugins/group/bridge/Start (93.43s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-6t2f2" [4266355d-983d-45a1-a370-bfe7d409a39d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006788791s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
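The ControllerPod subtests poll kube-system for a Running, Ready pod carrying the CNI's label (app=kindnet here, k8s-app=calico-node for Calico above). Outside the harness, `kubectl wait` expresses the same readiness gate; the timeout below mirrors the test's 10m0s budget:

    kubectl --context kindnet-385347 -n kube-system wait pod \
      -l app=kindnet --for=condition=Ready --timeout=10m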

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-385347 "pgrep -a kubelet"
I1017 20:44:45.991811   79439 config.go:182] Loaded profile config "kindnet-385347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-385347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fc9br" [6cc8a070-d2cb-4c0c-8135-f628d324db40] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1017 20:44:50.313192   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/no-preload-983636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-fc9br" [6cc8a070-d2cb-4c0c-8135-f628d324db40] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.004938501s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-385347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-lfq74" [56f733e0-9361-45cd-8211-863a3e7d023a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004839945s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-385347 "pgrep -a kubelet"
I1017 20:45:15.191471   79439 config.go:182] Loaded profile config "flannel-385347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-385347 replace --force -f testdata/netcat-deployment.yaml
I1017 20:45:15.470907   79439 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-48mrq" [a0ce3b93-3c3f-49c6-bdcd-e7aa35f9b796] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-48mrq" [a0ce3b93-3c3f-49c6-bdcd-e7aa35f9b796] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00436761s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)
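The kapi.go line above shows the harness first waiting for the Deployment itself to stabilize (observed generation caught up, status.replicas converging on spec.replicas) before it starts matching pods. The stock kubectl near-equivalent of that stabilization step would be:

    kubectl --context flannel-385347 rollout status deployment/netcat --timeout=15m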

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-385347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-385347 "pgrep -a kubelet"
I1017 20:45:50.746294   79439 config.go:182] Loaded profile config "enable-default-cni-385347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-385347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8qvt5" [e2b5869c-1c8e-4da0-8afd-c71ecec8eff7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8qvt5" [e2b5869c-1c8e-4da0-8afd-c71ecec8eff7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00554932s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-385347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-385347 "pgrep -a kubelet"
I1017 20:46:07.258697   79439 config.go:182] Loaded profile config "bridge-385347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-385347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zm4rk" [0b98c3e5-3bc3-4e36-8c86-fe1e866aaf11] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1017 20:46:12.234592   79439 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/no-preload-983636/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-zm4rk" [0b98c3e5-3bc3-4e36-8c86-fe1e866aaf11] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004994505s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-385347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-385347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (30/270)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.34s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-768633 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-347482" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-347482
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-385347 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-385347

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-385347

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-385347

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-385347

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-385347

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-385347

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-385347

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-385347

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-385347

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-385347

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-385347

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-385347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-385347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-385347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-385347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-385347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-385347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-385347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-385347" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-385347" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-385347" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-385347" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: kubelet daemon config:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> k8s: kubelet logs:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:33:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.232:8443
  name: kubernetes-upgrade-081973
contexts:
- context:
    cluster: kubernetes-upgrade-081973
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:33:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-081973
  name: kubernetes-upgrade-081973
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-081973
  user:
    client-certificate: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/kubernetes-upgrade-081973/client.crt
    client-key: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/kubernetes-upgrade-081973/client.key
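
Note: in the dump above, current-context is empty and the only context defined is kubernetes-upgrade-081973, which is why every kubectl call against "kubenet-385347" in this debugLogs pass fails with "context was not found". A minimal way to confirm this against the same kubeconfig (standard kubectl config subcommands; these invocations are our illustration, not part of the harness):

	kubectl config get-contexts                            # lists only kubernetes-upgrade-081973
	kubectl config use-context kubernetes-upgrade-081973   # would populate current-context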

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-385347

>>> host: docker daemon status:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: docker daemon config:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: docker system info:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: cri-docker daemon status:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: cri-docker daemon config:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: cri-dockerd version:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: containerd daemon status:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: containerd daemon config:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: containerd config dump:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: crio daemon status:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: crio daemon config:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: /etc/crio:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

>>> host: crio config:
* Profile "kubenet-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-385347"

----------------------- debugLogs end: kubenet-385347 [took: 3.052723077s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-385347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-385347
--- SKIP: TestNetworkPlugins/group/kubenet (3.23s)

TestNetworkPlugins/group/cilium (5.99s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-385347 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-385347

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-385347

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-385347

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-385347

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-385347

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-385347

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-385347

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-385347

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-385347

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-385347

>>> host: /etc/nsswitch.conf:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: /etc/hosts:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: /etc/resolv.conf:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-385347

>>> host: crictl pods:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: crictl containers:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> k8s: describe netcat deployment:
error: context "cilium-385347" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-385347" does not exist

>>> k8s: netcat logs:
error: context "cilium-385347" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-385347" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-385347" does not exist

>>> k8s: coredns logs:
error: context "cilium-385347" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-385347" does not exist

>>> k8s: api server logs:
error: context "cilium-385347" does not exist

>>> host: /etc/cni:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: ip a s:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: ip r s:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: iptables-save:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: iptables table nat:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-385347

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-385347

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-385347" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-385347" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-385347

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-385347

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-385347" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-385347" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-385347" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-385347" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-385347" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: kubelet daemon config:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> k8s: kubelet logs:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:35:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.134:8443
  name: force-systemd-flag-449009
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21753-75534/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:33:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.232:8443
  name: kubernetes-upgrade-081973
contexts:
- context:
    cluster: force-systemd-flag-449009
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:35:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: force-systemd-flag-449009
  name: force-systemd-flag-449009
- context:
    cluster: kubernetes-upgrade-081973
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:33:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-081973
  name: kubernetes-upgrade-081973
current-context: force-systemd-flag-449009
kind: Config
users:
- name: force-systemd-flag-449009
  user:
    client-certificate: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/force-systemd-flag-449009/client.crt
    client-key: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/force-systemd-flag-449009/client.key
- name: kubernetes-upgrade-081973
  user:
    client-certificate: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/kubernetes-upgrade-081973/client.crt
    client-key: /home/jenkins/minikube-integration/21753-75534/.minikube/profiles/kubernetes-upgrade-081973/client.key
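
Note: this kubeconfig carries only the force-systemd-flag-449009 and kubernetes-upgrade-081973 entries; no cilium-385347 context exists because the test was skipped before "minikube start" ever ran for that profile, so the context lookups above necessarily fail. For reference, a sketch of how such a profile would be created by hand (flag values are assumptions matching this KVM/crio job, not taken from the log):

	minikube start -p cilium-385347 --driver=kvm2 --container-runtime=crio --cni=cilium
	kubectl config get-contexts    # cilium-385347 would then appear alongside the entries above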

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-385347

>>> host: docker daemon status:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: docker daemon config:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: docker system info:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: cri-docker daemon status:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: cri-docker daemon config:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: cri-dockerd version:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: containerd daemon status:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: containerd daemon config:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: containerd config dump:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: crio daemon status:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: crio daemon config:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: /etc/crio:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

>>> host: crio config:
* Profile "cilium-385347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-385347"

----------------------- debugLogs end: cilium-385347 [took: 5.811489959s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-385347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-385347
--- SKIP: TestNetworkPlugins/group/cilium (5.99s)